00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 147 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3649 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.084 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.164 Fetching changes from the remote Git repository 00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.238 Using shallow fetch with depth 1 00:00:00.238 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.238 > git --version # timeout=10 00:00:00.312 > git --version # 'git version 2.39.2' 00:00:00.312 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.364 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.364 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.010 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.027 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.041 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.041 > git config core.sparsecheckout # timeout=10 00:00:06.055 > git read-tree -mu HEAD # timeout=10 00:00:06.074 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.100 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.100 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.185 [Pipeline] Start of Pipeline 00:00:06.199 [Pipeline] library 00:00:06.201 Loading library shm_lib@master 00:00:06.201 Library shm_lib@master is cached. Copying from home. 00:00:06.219 [Pipeline] node 00:00:06.231 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.233 [Pipeline] { 00:00:06.243 [Pipeline] catchError 00:00:06.244 [Pipeline] { 00:00:06.257 [Pipeline] wrap 00:00:06.266 [Pipeline] { 00:00:06.274 [Pipeline] stage 00:00:06.276 [Pipeline] { (Prologue) 00:00:06.546 [Pipeline] sh 00:00:06.831 + logger -p user.info -t JENKINS-CI 00:00:06.849 [Pipeline] echo 00:00:06.850 Node: CYP9 00:00:06.857 [Pipeline] sh 00:00:07.161 [Pipeline] setCustomBuildProperty 00:00:07.173 [Pipeline] echo 00:00:07.175 Cleanup processes 00:00:07.180 [Pipeline] sh 00:00:07.466 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.466 2313684 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.480 [Pipeline] sh 00:00:07.771 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.771 ++ grep -v 'sudo pgrep' 00:00:07.771 ++ awk '{print $1}' 00:00:07.771 + sudo kill -9 00:00:07.771 + true 00:00:07.786 [Pipeline] cleanWs 00:00:07.795 [WS-CLEANUP] Deleting project workspace... 00:00:07.795 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.802 [WS-CLEANUP] done 00:00:07.805 [Pipeline] setCustomBuildProperty 00:00:07.817 [Pipeline] sh 00:00:08.101 + sudo git config --global --replace-all safe.directory '*' 00:00:08.179 [Pipeline] httpRequest 00:00:11.247 [Pipeline] echo 00:00:11.248 Sorcerer 10.211.164.101 is dead 00:00:11.257 [Pipeline] httpRequest 00:00:13.525 [Pipeline] echo 00:00:13.526 Sorcerer 10.211.164.101 is alive 00:00:13.535 [Pipeline] retry 00:00:13.536 [Pipeline] { 00:00:13.546 [Pipeline] httpRequest 00:00:13.550 HttpMethod: GET 00:00:13.550 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.551 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.568 Response Code: HTTP/1.1 200 OK 00:00:13.569 Success: Status code 200 is in the accepted range: 200,404 00:00:13.569 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.884 [Pipeline] } 00:00:31.901 [Pipeline] // retry 00:00:31.908 [Pipeline] sh 00:00:32.197 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:32.215 [Pipeline] httpRequest 00:00:34.811 [Pipeline] echo 00:00:34.814 Sorcerer 10.211.164.101 is alive 00:00:34.866 [Pipeline] retry 00:00:34.870 [Pipeline] { 00:00:34.911 [Pipeline] httpRequest 00:00:34.917 HttpMethod: GET 00:00:34.917 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:34.918 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:34.926 Response Code: HTTP/1.1 200 OK 00:00:34.927 Success: Status code 200 is in the accepted range: 200,404 00:00:34.927 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:03:04.026 [Pipeline] } 00:03:04.045 [Pipeline] // retry 00:03:04.053 [Pipeline] sh 00:03:04.344 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:03:07.669 [Pipeline] sh 00:03:07.960 + git -C spdk log --oneline -n5 00:03:07.960 b18e1bd62 version: v24.09.1-pre 00:03:07.960 19524ad45 version: v24.09 00:03:07.960 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:03:07.960 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:03:07.960 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:03:07.978 [Pipeline] withCredentials 00:03:07.990 > git --version # timeout=10 00:03:08.003 > git --version # 'git version 2.39.2' 00:03:08.031 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:03:08.033 [Pipeline] { 00:03:08.042 [Pipeline] retry 00:03:08.044 [Pipeline] { 00:03:08.057 [Pipeline] sh 00:03:08.555 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:03:11.114 [Pipeline] } 00:03:11.134 [Pipeline] // retry 00:03:11.139 [Pipeline] } 00:03:11.156 [Pipeline] // withCredentials 00:03:11.166 [Pipeline] httpRequest 00:03:14.131 [Pipeline] echo 00:03:14.133 Sorcerer 10.211.164.101 is alive 00:03:14.142 [Pipeline] retry 00:03:14.144 [Pipeline] { 00:03:14.158 [Pipeline] httpRequest 00:03:14.163 HttpMethod: GET 00:03:14.164 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:03:14.164 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:03:14.168 Response Code: HTTP/1.1 200 OK 00:03:14.168 Success: Status code 200 is in the accepted range: 200,404 00:03:14.169 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:03:19.549 [Pipeline] } 00:03:19.567 [Pipeline] // retry 00:03:19.574 [Pipeline] sh 00:03:19.868 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:03:21.802 [Pipeline] sh 00:03:22.093 + git -C dpdk log --oneline -n5 00:03:22.093 caf0f5d395 version: 22.11.4 00:03:22.093 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:03:22.093 dc9c799c7d vhost: fix missing spinlock unlock 00:03:22.093 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:03:22.093 6ef77f2a5e net/gve: fix RX buffer size alignment 00:03:22.104 [Pipeline] } 00:03:22.119 [Pipeline] // stage 00:03:22.128 [Pipeline] stage 00:03:22.130 [Pipeline] { (Prepare) 00:03:22.152 [Pipeline] writeFile 00:03:22.168 [Pipeline] sh 00:03:22.458 + logger -p user.info -t JENKINS-CI 00:03:22.473 [Pipeline] sh 00:03:22.763 + logger -p user.info -t JENKINS-CI 00:03:22.777 [Pipeline] sh 00:03:23.067 + cat autorun-spdk.conf 00:03:23.067 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:23.067 SPDK_TEST_NVMF=1 00:03:23.067 SPDK_TEST_NVME_CLI=1 00:03:23.067 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:23.067 SPDK_TEST_NVMF_NICS=e810 00:03:23.067 SPDK_TEST_VFIOUSER=1 00:03:23.067 SPDK_RUN_UBSAN=1 00:03:23.067 NET_TYPE=phy 00:03:23.067 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:03:23.067 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:23.076 RUN_NIGHTLY=1 00:03:23.081 [Pipeline] readFile 00:03:23.107 [Pipeline] withEnv 00:03:23.109 [Pipeline] { 00:03:23.122 [Pipeline] sh 00:03:23.414 + set -ex 00:03:23.414 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:23.414 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:23.414 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:23.414 ++ SPDK_TEST_NVMF=1 00:03:23.414 ++ SPDK_TEST_NVME_CLI=1 00:03:23.414 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:23.414 ++ SPDK_TEST_NVMF_NICS=e810 00:03:23.414 ++ SPDK_TEST_VFIOUSER=1 00:03:23.414 ++ SPDK_RUN_UBSAN=1 00:03:23.414 ++ NET_TYPE=phy 00:03:23.414 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:03:23.414 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:23.414 ++ RUN_NIGHTLY=1 00:03:23.414 + case $SPDK_TEST_NVMF_NICS in 00:03:23.414 + DRIVERS=ice 00:03:23.414 + [[ tcp == \r\d\m\a ]] 00:03:23.414 + [[ -n ice ]] 00:03:23.414 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:23.414 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:23.414 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:23.414 rmmod: ERROR: Module irdma is not currently loaded 00:03:23.414 rmmod: ERROR: Module i40iw is not currently loaded 00:03:23.414 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:23.414 + true 00:03:23.414 + for D in $DRIVERS 00:03:23.414 + sudo modprobe ice 00:03:23.414 + exit 0 00:03:23.425 [Pipeline] } 00:03:23.443 [Pipeline] // withEnv 00:03:23.449 [Pipeline] } 00:03:23.463 [Pipeline] // stage 00:03:23.473 [Pipeline] catchError 00:03:23.475 [Pipeline] { 00:03:23.489 [Pipeline] timeout 00:03:23.489 Timeout set to expire in 1 hr 0 min 00:03:23.491 [Pipeline] { 00:03:23.505 [Pipeline] stage 00:03:23.508 [Pipeline] { (Tests) 00:03:23.525 [Pipeline] sh 00:03:23.819 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:23.819 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:23.819 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:23.819 + [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:23.819 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.819 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:23.819 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:23.819 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:23.819 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:23.819 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:23.819 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:23.819 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:23.819 + source /etc/os-release 00:03:23.819 ++ NAME='Fedora Linux' 00:03:23.819 ++ VERSION='39 (Cloud Edition)' 00:03:23.819 ++ ID=fedora 00:03:23.819 ++ VERSION_ID=39 00:03:23.819 ++ VERSION_CODENAME= 00:03:23.819 ++ PLATFORM_ID=platform:f39 00:03:23.819 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:23.819 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:23.819 ++ LOGO=fedora-logo-icon 00:03:23.819 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:23.819 ++ HOME_URL=https://fedoraproject.org/ 00:03:23.819 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:23.819 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:23.819 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:23.819 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:23.819 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:23.819 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:23.819 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:23.819 ++ SUPPORT_END=2024-11-12 00:03:23.819 ++ VARIANT='Cloud Edition' 00:03:23.819 ++ VARIANT_ID=cloud 00:03:23.819 + uname -a 00:03:23.819 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:23.819 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:27.124 Hugepages 00:03:27.124 node hugesize free / total 00:03:27.124 node0 1048576kB 0 / 0 00:03:27.124 node0 2048kB 0 / 0 00:03:27.124 node1 1048576kB 0 / 0 00:03:27.124 node1 2048kB 0 / 0 00:03:27.124 00:03:27.124 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:27.124 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:27.124 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:27.124 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:27.124 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:27.124 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:27.124 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:27.124 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:27.124 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:27.124 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:27.124 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:27.124 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:27.124 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:27.124 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:27.124 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:27.124 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:27.124 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:27.124 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:27.124 + rm -f /tmp/spdk-ld-path 00:03:27.124 + source autorun-spdk.conf 00:03:27.124 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:27.124 ++ SPDK_TEST_NVMF=1 00:03:27.124 ++ SPDK_TEST_NVME_CLI=1 00:03:27.124 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:27.124 ++ SPDK_TEST_NVMF_NICS=e810 00:03:27.124 ++ SPDK_TEST_VFIOUSER=1 00:03:27.124 ++ SPDK_RUN_UBSAN=1 00:03:27.124 ++ NET_TYPE=phy 00:03:27.124 ++ 
SPDK_TEST_NATIVE_DPDK=v22.11.4 00:03:27.124 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:27.124 ++ RUN_NIGHTLY=1 00:03:27.124 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:27.124 + [[ -n '' ]] 00:03:27.124 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:27.124 + for M in /var/spdk/build-*-manifest.txt 00:03:27.124 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:27.124 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:27.124 + for M in /var/spdk/build-*-manifest.txt 00:03:27.124 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:27.124 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:27.124 + for M in /var/spdk/build-*-manifest.txt 00:03:27.124 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:27.124 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:27.124 ++ uname 00:03:27.124 + [[ Linux == \L\i\n\u\x ]] 00:03:27.124 + sudo dmesg -T 00:03:27.124 + sudo dmesg --clear 00:03:27.124 + dmesg_pid=2315744 00:03:27.124 + [[ Fedora Linux == FreeBSD ]] 00:03:27.124 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:27.124 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:27.124 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:27.124 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:27.124 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:27.124 + sudo dmesg -Tw 00:03:27.124 + [[ -x /usr/src/fio-static/fio ]] 00:03:27.124 + export FIO_BIN=/usr/src/fio-static/fio 00:03:27.124 + FIO_BIN=/usr/src/fio-static/fio 00:03:27.124 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:27.124 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:27.124 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:27.124 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:27.124 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:27.125 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:27.125 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:27.125 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:27.125 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:27.125 Test configuration: 00:03:27.125 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:27.125 SPDK_TEST_NVMF=1 00:03:27.125 SPDK_TEST_NVME_CLI=1 00:03:27.125 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:27.125 SPDK_TEST_NVMF_NICS=e810 00:03:27.125 SPDK_TEST_VFIOUSER=1 00:03:27.125 SPDK_RUN_UBSAN=1 00:03:27.125 NET_TYPE=phy 00:03:27.125 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:03:27.125 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:27.125 RUN_NIGHTLY=1 17:30:27 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:27.125 17:30:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:27.125 17:30:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:27.125 17:30:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:27.125 17:30:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:27.125 17:30:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:27.125 17:30:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.125 17:30:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.125 17:30:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.125 17:30:27 -- paths/export.sh@5 -- $ export PATH 00:03:27.125 17:30:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.125 17:30:27 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:27.387 17:30:27 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:27.387 17:30:27 -- 
common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732120227.XXXXXX 00:03:27.387 17:30:27 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732120227.25I6jQ 00:03:27.387 17:30:27 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:27.387 17:30:27 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:03:27.387 17:30:27 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:27.387 17:30:27 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:03:27.387 17:30:27 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:27.387 17:30:27 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:27.387 17:30:27 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:27.387 17:30:27 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:27.387 17:30:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:27.387 17:30:27 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:03:27.387 17:30:27 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:27.387 17:30:27 -- pm/common@17 -- $ local monitor 00:03:27.387 17:30:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.387 17:30:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.387 17:30:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.387 17:30:27 -- pm/common@21 -- $ date +%s 00:03:27.387 17:30:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.387 17:30:27 -- pm/common@21 -- $ date +%s 00:03:27.387 17:30:27 -- pm/common@25 -- $ sleep 1 00:03:27.387 17:30:27 -- pm/common@21 -- $ date +%s 00:03:27.387 17:30:27 -- pm/common@21 -- $ date +%s 00:03:27.387 17:30:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732120227 00:03:27.387 17:30:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732120227 00:03:27.387 17:30:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732120227 00:03:27.387 17:30:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732120227 00:03:27.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732120227_collect-cpu-load.pm.log 00:03:27.387 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732120227_collect-vmstat.pm.log 00:03:27.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732120227_collect-cpu-temp.pm.log 00:03:27.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732120227_collect-bmc-pm.bmc.pm.log 00:03:28.332 17:30:28 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:28.332 17:30:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:28.332 17:30:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:28.332 17:30:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.332 17:30:28 -- spdk/autobuild.sh@16 -- $ date -u 00:03:28.332 Wed Nov 20 04:30:28 PM UTC 2024 00:03:28.332 17:30:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:28.332 v24.09-rc1-9-gb18e1bd62 00:03:28.332 17:30:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:28.332 17:30:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:28.332 17:30:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:28.332 17:30:28 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:28.332 17:30:28 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:28.332 17:30:28 -- common/autotest_common.sh@10 -- $ set +x 00:03:28.332 ************************************ 00:03:28.332 START TEST ubsan 00:03:28.332 ************************************ 00:03:28.332 17:30:28 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:28.332 using ubsan 00:03:28.332 00:03:28.332 real 0m0.001s 00:03:28.332 user 0m0.001s 00:03:28.332 sys 0m0.000s 00:03:28.332 17:30:28 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:28.332 17:30:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:28.332 ************************************ 00:03:28.332 END TEST ubsan 00:03:28.332 ************************************ 00:03:28.332 17:30:28 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:03:28.332 17:30:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:28.332 17:30:28 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:28.332 17:30:28 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:28.332 17:30:28 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:28.332 17:30:28 -- common/autotest_common.sh@10 -- $ set +x 00:03:28.332 ************************************ 00:03:28.332 START TEST build_native_dpdk 00:03:28.332 ************************************ 00:03:28.332 17:30:28 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:28.332 17:30:28 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ CC=gcc 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:28.332 17:30:28 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:03:28.594 caf0f5d395 version: 22.11.4 00:03:28.594 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:03:28.594 dc9c799c7d vhost: fix missing spinlock unlock 00:03:28.594 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:03:28.594 6ef77f2a5e net/gve: fix RX buffer size alignment 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:28.594 17:30:28 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 
22.11.4 21.11.0 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:28.595 patching file config/rte_config.h 00:03:28.595 Hunk #1 succeeded at 60 (offset 1 line). 
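
The xtrace above is SPDK's cmp_versions helper from scripts/common.sh at work: both version strings are split on the separators `.-:` into arrays and compared field by field as decimal integers, and the exit status answers the requested operator (here `lt`, which fails because 22 > 21 in the first field). A minimal self-contained sketch of the same technique follows; cmp_lt is an illustrative name, not the real helper, and it hardcodes the less-than case that the full function dispatches on:

    #!/usr/bin/env bash
    # cmp_lt VER1 VER2 -> exit 0 iff VER1 < VER2, comparing dotted fields numerically.
    cmp_lt() {
        local IFS=.-:            # split on the same separators the trace shows
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                 # equal versions are not strictly less
    }
    cmp_lt 22.11.4 21.11.0; echo $?   # prints 1, matching the "return 1" in the trace above
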
00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:28.595 patching file lib/pcapng/rte_pcapng.c 00:03:28.595 Hunk #1 succeeded at 110 (offset -18 lines). 
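
Here the comparison goes the other way (22.11.4 is below 24.07.0, so cmp_versions succeeds) and autobuild applies the rte_pcapng backport. The surrounding logic is a version-gated patch: a fixup is applied only when the checked-out DPDK predates the release that carries the fix upstream. An illustrative sketch of that gate, reusing the cmp_lt helper from the previous example; the patch filename is a placeholder, not the real file:

    # Apply the pcapng fixup only to DPDK releases older than 24.07.0,
    # mirroring the "lt 22.11.4 24.07.0" guard around "patch -p1" in the trace.
    dpdk_ver=22.11.4
    if cmp_lt "$dpdk_ver" 24.07.0; then
        patch -p1 < pcapng-fix.patch    # placeholder name for the backport patch
    fi
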
00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:28.595 17:30:28 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:28.595 17:30:28 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:33.890 The Meson build system 00:03:33.891 Version: 1.5.0 00:03:33.891 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:33.891 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:03:33.891 Build 
type: native build 00:03:33.891 Program cat found: YES (/usr/bin/cat) 00:03:33.891 Project name: DPDK 00:03:33.891 Project version: 22.11.4 00:03:33.891 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:33.891 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:33.891 Host machine cpu family: x86_64 00:03:33.891 Host machine cpu: x86_64 00:03:33.891 Message: ## Building in Developer Mode ## 00:03:33.891 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:33.891 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:03:33.891 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:03:33.891 Program objdump found: YES (/usr/bin/objdump) 00:03:33.891 Program python3 found: YES (/usr/bin/python3) 00:03:33.891 Program cat found: YES (/usr/bin/cat) 00:03:33.891 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:03:33.891 Checking for size of "void *" : 8 00:03:33.891 Checking for size of "void *" : 8 (cached) 00:03:33.891 Library m found: YES 00:03:33.891 Library numa found: YES 00:03:33.891 Has header "numaif.h" : YES 00:03:33.891 Library fdt found: NO 00:03:33.891 Library execinfo found: NO 00:03:33.891 Has header "execinfo.h" : YES 00:03:33.891 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:33.891 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:33.891 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:33.891 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:33.891 Run-time dependency openssl found: YES 3.1.1 00:03:33.891 Run-time dependency libpcap found: YES 1.10.4 00:03:33.891 Has header "pcap.h" with dependency libpcap: YES 00:03:33.891 Compiler for C supports arguments -Wcast-qual: YES 00:03:33.891 Compiler for C supports arguments -Wdeprecated: YES 00:03:33.891 Compiler for C supports arguments -Wformat: YES 00:03:33.891 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:33.891 Compiler for C supports arguments -Wformat-security: NO 00:03:33.891 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:33.891 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:33.891 Compiler for C supports arguments -Wnested-externs: YES 00:03:33.891 Compiler for C supports arguments -Wold-style-definition: YES 00:03:33.891 Compiler for C supports arguments -Wpointer-arith: YES 00:03:33.891 Compiler for C supports arguments -Wsign-compare: YES 00:03:33.891 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:33.891 Compiler for C supports arguments -Wundef: YES 00:03:33.891 Compiler for C supports arguments -Wwrite-strings: YES 00:03:33.891 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:33.891 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:33.891 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:33.891 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:33.891 Compiler for C supports arguments -mavx512f: YES 00:03:33.891 Checking if "AVX512 checking" compiles: YES 00:03:33.891 Fetching value of define "__SSE4_2__" : 1 00:03:33.891 Fetching value of define "__AES__" : 1 00:03:33.891 Fetching value of define "__AVX__" : 1 00:03:33.891 Fetching value of define "__AVX2__" : 1 00:03:33.891 Fetching value of define "__AVX512BW__" : 1 00:03:33.891 Fetching 
value of define "__AVX512CD__" : 1 00:03:33.891 Fetching value of define "__AVX512DQ__" : 1 00:03:33.891 Fetching value of define "__AVX512F__" : 1 00:03:33.891 Fetching value of define "__AVX512VL__" : 1 00:03:33.891 Fetching value of define "__PCLMUL__" : 1 00:03:33.891 Fetching value of define "__RDRND__" : 1 00:03:33.891 Fetching value of define "__RDSEED__" : 1 00:03:33.891 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:33.891 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:33.891 Message: lib/kvargs: Defining dependency "kvargs" 00:03:33.891 Message: lib/telemetry: Defining dependency "telemetry" 00:03:33.891 Checking for function "getentropy" : YES 00:03:33.891 Message: lib/eal: Defining dependency "eal" 00:03:33.891 Message: lib/ring: Defining dependency "ring" 00:03:33.891 Message: lib/rcu: Defining dependency "rcu" 00:03:33.891 Message: lib/mempool: Defining dependency "mempool" 00:03:33.891 Message: lib/mbuf: Defining dependency "mbuf" 00:03:33.891 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:33.891 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:33.891 Compiler for C supports arguments -mpclmul: YES 00:03:33.891 Compiler for C supports arguments -maes: YES 00:03:33.891 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:33.891 Compiler for C supports arguments -mavx512bw: YES 00:03:33.891 Compiler for C supports arguments -mavx512dq: YES 00:03:33.891 Compiler for C supports arguments -mavx512vl: YES 00:03:33.891 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:33.891 Compiler for C supports arguments -mavx2: YES 00:03:33.891 Compiler for C supports arguments -mavx: YES 00:03:33.891 Message: lib/net: Defining dependency "net" 00:03:33.891 Message: lib/meter: Defining dependency "meter" 00:03:33.891 Message: lib/ethdev: Defining dependency "ethdev" 00:03:33.891 Message: lib/pci: Defining dependency "pci" 00:03:33.891 Message: lib/cmdline: Defining dependency "cmdline" 00:03:33.891 Message: lib/metrics: Defining dependency "metrics" 00:03:33.891 Message: lib/hash: Defining dependency "hash" 00:03:33.891 Message: lib/timer: Defining dependency "timer" 00:03:33.891 Fetching value of define "__AVX2__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:33.891 Message: lib/acl: Defining dependency "acl" 00:03:33.891 Message: lib/bbdev: Defining dependency "bbdev" 00:03:33.891 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:33.891 Run-time dependency libelf found: YES 0.191 00:03:33.891 Message: lib/bpf: Defining dependency "bpf" 00:03:33.891 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:33.891 Message: lib/compressdev: Defining dependency "compressdev" 00:03:33.891 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:33.891 Message: lib/distributor: Defining dependency "distributor" 00:03:33.891 Message: lib/efd: Defining dependency "efd" 00:03:33.891 Message: lib/eventdev: Defining dependency "eventdev" 00:03:33.891 Message: lib/gpudev: Defining dependency "gpudev" 00:03:33.891 Message: 
lib/gro: Defining dependency "gro" 00:03:33.891 Message: lib/gso: Defining dependency "gso" 00:03:33.891 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:33.891 Message: lib/jobstats: Defining dependency "jobstats" 00:03:33.891 Message: lib/latencystats: Defining dependency "latencystats" 00:03:33.891 Message: lib/lpm: Defining dependency "lpm" 00:03:33.891 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512IFMA__" : 1 00:03:33.891 Message: lib/member: Defining dependency "member" 00:03:33.891 Message: lib/pcapng: Defining dependency "pcapng" 00:03:33.891 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:33.891 Message: lib/power: Defining dependency "power" 00:03:33.891 Message: lib/rawdev: Defining dependency "rawdev" 00:03:33.891 Message: lib/regexdev: Defining dependency "regexdev" 00:03:33.891 Message: lib/dmadev: Defining dependency "dmadev" 00:03:33.891 Message: lib/rib: Defining dependency "rib" 00:03:33.891 Message: lib/reorder: Defining dependency "reorder" 00:03:33.891 Message: lib/sched: Defining dependency "sched" 00:03:33.891 Message: lib/security: Defining dependency "security" 00:03:33.891 Message: lib/stack: Defining dependency "stack" 00:03:33.891 Has header "linux/userfaultfd.h" : YES 00:03:33.891 Message: lib/vhost: Defining dependency "vhost" 00:03:33.891 Message: lib/ipsec: Defining dependency "ipsec" 00:03:33.891 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:33.891 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:33.891 Message: lib/fib: Defining dependency "fib" 00:03:33.891 Message: lib/port: Defining dependency "port" 00:03:33.891 Message: lib/pdump: Defining dependency "pdump" 00:03:33.891 Message: lib/table: Defining dependency "table" 00:03:33.891 Message: lib/pipeline: Defining dependency "pipeline" 00:03:33.891 Message: lib/graph: Defining dependency "graph" 00:03:33.891 Message: lib/node: Defining dependency "node" 00:03:33.891 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:33.891 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:33.891 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:33.891 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:33.891 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:33.891 Compiler for C supports arguments -Wno-unused-value: YES 00:03:33.891 Compiler for C supports arguments -Wno-format: YES 00:03:33.891 Compiler for C supports arguments -Wno-format-security: YES 00:03:33.891 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:33.891 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:35.282 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:35.282 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:35.282 Fetching value of define "__AVX2__" : 1 (cached) 00:03:35.282 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:35.282 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:35.282 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:35.282 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:35.282 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:35.282 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:35.282 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:35.282 Configuring 
doxy-api.conf using configuration 00:03:35.282 Program sphinx-build found: NO 00:03:35.282 Configuring rte_build_config.h using configuration 00:03:35.282 Message: 00:03:35.282 ================= 00:03:35.282 Applications Enabled 00:03:35.282 ================= 00:03:35.282 00:03:35.282 apps: 00:03:35.282 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:35.282 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:35.282 test-security-perf, 00:03:35.282 00:03:35.282 Message: 00:03:35.282 ================= 00:03:35.282 Libraries Enabled 00:03:35.282 ================= 00:03:35.282 00:03:35.282 libs: 00:03:35.282 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:35.282 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:35.282 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:35.282 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:03:35.282 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:03:35.282 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:35.282 table, pipeline, graph, node, 00:03:35.282 00:03:35.282 Message: 00:03:35.282 =============== 00:03:35.282 Drivers Enabled 00:03:35.282 =============== 00:03:35.282 00:03:35.282 common: 00:03:35.282 00:03:35.282 bus: 00:03:35.282 pci, vdev, 00:03:35.282 mempool: 00:03:35.282 ring, 00:03:35.282 dma: 00:03:35.282 00:03:35.282 net: 00:03:35.282 i40e, 00:03:35.282 raw: 00:03:35.282 00:03:35.282 crypto: 00:03:35.282 00:03:35.282 compress: 00:03:35.282 00:03:35.282 regex: 00:03:35.282 00:03:35.282 vdpa: 00:03:35.282 00:03:35.282 event: 00:03:35.282 00:03:35.282 baseband: 00:03:35.282 00:03:35.282 gpu: 00:03:35.282 00:03:35.282 00:03:35.282 Message: 00:03:35.282 ================= 00:03:35.282 Content Skipped 00:03:35.282 ================= 00:03:35.282 00:03:35.282 apps: 00:03:35.282 00:03:35.282 libs: 00:03:35.282 kni: explicitly disabled via build config (deprecated lib) 00:03:35.282 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:35.282 00:03:35.282 drivers: 00:03:35.282 common/cpt: not in enabled drivers build config 00:03:35.282 common/dpaax: not in enabled drivers build config 00:03:35.282 common/iavf: not in enabled drivers build config 00:03:35.282 common/idpf: not in enabled drivers build config 00:03:35.282 common/mvep: not in enabled drivers build config 00:03:35.282 common/octeontx: not in enabled drivers build config 00:03:35.282 bus/auxiliary: not in enabled drivers build config 00:03:35.282 bus/dpaa: not in enabled drivers build config 00:03:35.282 bus/fslmc: not in enabled drivers build config 00:03:35.282 bus/ifpga: not in enabled drivers build config 00:03:35.282 bus/vmbus: not in enabled drivers build config 00:03:35.282 common/cnxk: not in enabled drivers build config 00:03:35.282 common/mlx5: not in enabled drivers build config 00:03:35.282 common/qat: not in enabled drivers build config 00:03:35.282 common/sfc_efx: not in enabled drivers build config 00:03:35.282 mempool/bucket: not in enabled drivers build config 00:03:35.282 mempool/cnxk: not in enabled drivers build config 00:03:35.282 mempool/dpaa: not in enabled drivers build config 00:03:35.282 mempool/dpaa2: not in enabled drivers build config 00:03:35.282 mempool/octeontx: not in enabled drivers build config 00:03:35.282 mempool/stack: not in enabled drivers build config 00:03:35.282 dma/cnxk: not in enabled drivers build config 
00:03:35.282 dma/dpaa: not in enabled drivers build config
00:03:35.282 dma/dpaa2: not in enabled drivers build config
00:03:35.282 dma/hisilicon: not in enabled drivers build config
00:03:35.282 dma/idxd: not in enabled drivers build config
00:03:35.282 dma/ioat: not in enabled drivers build config
00:03:35.282 dma/skeleton: not in enabled drivers build config
00:03:35.282 net/af_packet: not in enabled drivers build config
00:03:35.282 net/af_xdp: not in enabled drivers build config
00:03:35.282 net/ark: not in enabled drivers build config
00:03:35.282 net/atlantic: not in enabled drivers build config
00:03:35.282 net/avp: not in enabled drivers build config
00:03:35.282 net/axgbe: not in enabled drivers build config
00:03:35.282 net/bnx2x: not in enabled drivers build config
00:03:35.282 net/bnxt: not in enabled drivers build config
00:03:35.282 net/bonding: not in enabled drivers build config
00:03:35.282 net/cnxk: not in enabled drivers build config
00:03:35.282 net/cxgbe: not in enabled drivers build config
00:03:35.282 net/dpaa: not in enabled drivers build config
00:03:35.282 net/dpaa2: not in enabled drivers build config
00:03:35.282 net/e1000: not in enabled drivers build config
00:03:35.282 net/ena: not in enabled drivers build config
00:03:35.282 net/enetc: not in enabled drivers build config
00:03:35.282 net/enetfec: not in enabled drivers build config
00:03:35.282 net/enic: not in enabled drivers build config
00:03:35.282 net/failsafe: not in enabled drivers build config
00:03:35.282 net/fm10k: not in enabled drivers build config
00:03:35.282 net/gve: not in enabled drivers build config
00:03:35.282 net/hinic: not in enabled drivers build config
00:03:35.282 net/hns3: not in enabled drivers build config
00:03:35.282 net/iavf: not in enabled drivers build config
00:03:35.282 net/ice: not in enabled drivers build config
00:03:35.282 net/idpf: not in enabled drivers build config
00:03:35.282 net/igc: not in enabled drivers build config
00:03:35.282 net/ionic: not in enabled drivers build config
00:03:35.282 net/ipn3ke: not in enabled drivers build config
00:03:35.282 net/ixgbe: not in enabled drivers build config
00:03:35.282 net/kni: not in enabled drivers build config
00:03:35.282 net/liquidio: not in enabled drivers build config
00:03:35.282 net/mana: not in enabled drivers build config
00:03:35.282 net/memif: not in enabled drivers build config
00:03:35.282 net/mlx4: not in enabled drivers build config
00:03:35.282 net/mlx5: not in enabled drivers build config
00:03:35.282 net/mvneta: not in enabled drivers build config
00:03:35.282 net/mvpp2: not in enabled drivers build config
00:03:35.282 net/netvsc: not in enabled drivers build config
00:03:35.282 net/nfb: not in enabled drivers build config
00:03:35.282 net/nfp: not in enabled drivers build config
00:03:35.282 net/ngbe: not in enabled drivers build config
00:03:35.282 net/null: not in enabled drivers build config
00:03:35.282 net/octeontx: not in enabled drivers build config
00:03:35.282 net/octeon_ep: not in enabled drivers build config
00:03:35.282 net/pcap: not in enabled drivers build config
00:03:35.282 net/pfe: not in enabled drivers build config
00:03:35.282 net/qede: not in enabled drivers build config
00:03:35.282 net/ring: not in enabled drivers build config
00:03:35.282 net/sfc: not in enabled drivers build config
00:03:35.283 net/softnic: not in enabled drivers build config
00:03:35.283 net/tap: not in enabled drivers build config
00:03:35.283 net/thunderx: not in enabled drivers build config
00:03:35.283 net/txgbe: not in enabled drivers build config
00:03:35.283 net/vdev_netvsc: not in enabled drivers build config
00:03:35.283 net/vhost: not in enabled drivers build config
00:03:35.283 net/virtio: not in enabled drivers build config
00:03:35.283 net/vmxnet3: not in enabled drivers build config
00:03:35.283 raw/cnxk_bphy: not in enabled drivers build config
00:03:35.283 raw/cnxk_gpio: not in enabled drivers build config
00:03:35.283 raw/dpaa2_cmdif: not in enabled drivers build config
00:03:35.283 raw/ifpga: not in enabled drivers build config
00:03:35.283 raw/ntb: not in enabled drivers build config
00:03:35.283 raw/skeleton: not in enabled drivers build config
00:03:35.283 crypto/armv8: not in enabled drivers build config
00:03:35.283 crypto/bcmfs: not in enabled drivers build config
00:03:35.283 crypto/caam_jr: not in enabled drivers build config
00:03:35.283 crypto/ccp: not in enabled drivers build config
00:03:35.283 crypto/cnxk: not in enabled drivers build config
00:03:35.283 crypto/dpaa_sec: not in enabled drivers build config
00:03:35.283 crypto/dpaa2_sec: not in enabled drivers build config
00:03:35.283 crypto/ipsec_mb: not in enabled drivers build config
00:03:35.283 crypto/mlx5: not in enabled drivers build config
00:03:35.283 crypto/mvsam: not in enabled drivers build config
00:03:35.283 crypto/nitrox: not in enabled drivers build config
00:03:35.283 crypto/null: not in enabled drivers build config
00:03:35.283 crypto/octeontx: not in enabled drivers build config
00:03:35.283 crypto/openssl: not in enabled drivers build config
00:03:35.283 crypto/scheduler: not in enabled drivers build config
00:03:35.283 crypto/uadk: not in enabled drivers build config
00:03:35.283 crypto/virtio: not in enabled drivers build config
00:03:35.283 compress/isal: not in enabled drivers build config
00:03:35.283 compress/mlx5: not in enabled drivers build config
00:03:35.283 compress/octeontx: not in enabled drivers build config
00:03:35.283 compress/zlib: not in enabled drivers build config
00:03:35.283 regex/mlx5: not in enabled drivers build config
00:03:35.283 regex/cn9k: not in enabled drivers build config
00:03:35.283 vdpa/ifc: not in enabled drivers build config
00:03:35.283 vdpa/mlx5: not in enabled drivers build config
00:03:35.283 vdpa/sfc: not in enabled drivers build config
00:03:35.283 event/cnxk: not in enabled drivers build config
00:03:35.283 event/dlb2: not in enabled drivers build config
00:03:35.283 event/dpaa: not in enabled drivers build config
00:03:35.283 event/dpaa2: not in enabled drivers build config
00:03:35.283 event/dsw: not in enabled drivers build config
00:03:35.283 event/opdl: not in enabled drivers build config
00:03:35.283 event/skeleton: not in enabled drivers build config
00:03:35.283 event/sw: not in enabled drivers build config
00:03:35.283 event/octeontx: not in enabled drivers build config
00:03:35.283 baseband/acc: not in enabled drivers build config
00:03:35.283 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:03:35.283 baseband/fpga_lte_fec: not in enabled drivers build config
00:03:35.283 baseband/la12xx: not in enabled drivers build config
00:03:35.283 baseband/null: not in enabled drivers build config
00:03:35.283 baseband/turbo_sw: not in enabled drivers build config
00:03:35.283 gpu/cuda: not in enabled drivers build config
00:03:35.283
00:03:35.283
00:03:35.283 Build targets in project: 309
00:03:35.283
00:03:35.283 DPDK 22.11.4
00:03:35.283
00:03:35.283 User defined options
00:03:35.283 libdir : lib
00:03:35.283 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:03:35.283 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:35.283 c_link_args :
00:03:35.283 enable_docs : false
00:03:35.283 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:35.283 enable_kmods : false
00:03:35.283 machine : native
00:03:35.283 tests : false
00:03:35.283
00:03:35.283 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:35.283 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:35.283 17:30:35 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144
00:03:35.283 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:03:35.556 [1/738] Generating lib/rte_kvargs_def with a custom command
00:03:35.556 [2/738] Generating lib/rte_kvargs_mingw with a custom command
00:03:35.556 [3/738] Generating lib/rte_telemetry_def with a custom command
00:03:35.556 [4/738] Generating lib/rte_telemetry_mingw with a custom command
00:03:35.556 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:35.556 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:35.556 [7/738] Generating lib/rte_eal_def with a custom command
00:03:35.556 [8/738] Generating lib/rte_eal_mingw with a custom command
00:03:35.556 [9/738] Generating lib/rte_rcu_def with a custom command
00:03:35.556 [10/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:35.556 [11/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:35.556 [12/738] Generating lib/rte_mbuf_def with a custom command
00:03:35.556 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:35.556 [14/738] Generating lib/rte_mempool_def with a custom command
00:03:35.556 [15/738] Generating lib/rte_ring_mingw with a custom command
00:03:35.556 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:35.556 [17/738] Generating lib/rte_mempool_mingw with a custom command
00:03:35.556 [18/738] Generating lib/rte_ring_def with a custom command
00:03:35.556 [19/738] Generating lib/rte_net_def with a custom command
00:03:35.556 [20/738] Generating lib/rte_meter_def with a custom command
00:03:35.556 [21/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:35.556 [22/738] Generating lib/rte_net_mingw with a custom command
00:03:35.556 [23/738] Generating lib/rte_pci_def with a custom command
00:03:35.556 [24/738] Generating lib/rte_rcu_mingw with a custom command
00:03:35.556 [25/738] Generating lib/rte_mbuf_mingw with a custom command
00:03:35.556 [26/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:35.556 [27/738] Generating lib/rte_ethdev_def with a custom command
00:03:35.556 [28/738] Generating lib/rte_meter_mingw with a custom command
00:03:35.556 [29/738] Generating lib/rte_ethdev_mingw with a custom command
00:03:35.556 [30/738] Generating lib/rte_pci_mingw with a custom command
00:03:35.556 [31/738] Generating lib/rte_cmdline_mingw with a custom command
00:03:35.556 [32/738] Generating lib/rte_cmdline_def with a custom command
00:03:35.556 [33/738] Generating lib/rte_metrics_def with a custom command
00:03:35.556 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:35.556 [35/738] Compiling C object
lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:35.556 [36/738] Generating lib/rte_timer_def with a custom command 00:03:35.556 [37/738] Generating lib/rte_hash_def with a custom command 00:03:35.557 [38/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:35.557 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:35.557 [40/738] Generating lib/rte_metrics_mingw with a custom command 00:03:35.557 [41/738] Generating lib/rte_hash_mingw with a custom command 00:03:35.557 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:35.557 [43/738] Generating lib/rte_timer_mingw with a custom command 00:03:35.557 [44/738] Generating lib/rte_bitratestats_def with a custom command 00:03:35.557 [45/738] Generating lib/rte_bbdev_def with a custom command 00:03:35.557 [46/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:35.830 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:35.830 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:35.830 [49/738] Linking static target lib/librte_kvargs.a 00:03:35.830 [50/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:35.830 [51/738] Generating lib/rte_acl_mingw with a custom command 00:03:35.830 [52/738] Generating lib/rte_acl_def with a custom command 00:03:35.830 [53/738] Generating lib/rte_bbdev_mingw with a custom command 00:03:35.830 [54/738] Generating lib/rte_bitratestats_mingw with a custom command 00:03:35.830 [55/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:35.830 [56/738] Generating lib/rte_bpf_def with a custom command 00:03:35.830 [57/738] Generating lib/rte_cfgfile_def with a custom command 00:03:35.830 [58/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:35.830 [59/738] Generating lib/rte_cfgfile_mingw with a custom command 00:03:35.830 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:35.830 [61/738] Generating lib/rte_bpf_mingw with a custom command 00:03:35.830 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:35.830 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:35.830 [64/738] Generating lib/rte_cryptodev_def with a custom command 00:03:35.830 [65/738] Generating lib/rte_compressdev_mingw with a custom command 00:03:35.830 [66/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:35.830 [67/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:35.830 [68/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:35.830 [69/738] Generating lib/rte_compressdev_def with a custom command 00:03:35.830 [70/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:35.830 [71/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:35.830 [72/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:35.830 [73/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:35.830 [74/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:35.831 [75/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:35.831 [76/738] Generating lib/rte_cryptodev_mingw with a custom command 00:03:35.831 [77/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:35.831 [78/738] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:35.831 [79/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:35.831 [80/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:35.831 [81/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:35.831 [82/738] Generating lib/rte_distributor_mingw with a custom command 00:03:35.831 [83/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:35.831 [84/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:35.831 [85/738] Generating lib/rte_efd_def with a custom command 00:03:35.831 [86/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:35.831 [87/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:35.831 [88/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:35.831 [89/738] Generating lib/rte_efd_mingw with a custom command 00:03:35.831 [90/738] Generating lib/rte_distributor_def with a custom command 00:03:35.831 [91/738] Linking static target lib/librte_pci.a 00:03:35.831 [92/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:35.831 [93/738] Generating lib/rte_eventdev_mingw with a custom command 00:03:35.831 [94/738] Generating lib/rte_gpudev_mingw with a custom command 00:03:36.090 [95/738] Generating lib/rte_gpudev_def with a custom command 00:03:36.090 [96/738] Generating lib/rte_eventdev_def with a custom command 00:03:36.090 [97/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:36.090 [98/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:36.090 [99/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:36.090 [100/738] Generating lib/rte_gro_def with a custom command 00:03:36.090 [101/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:36.090 [102/738] Generating lib/rte_gro_mingw with a custom command 00:03:36.090 [103/738] Generating lib/rte_gso_def with a custom command 00:03:36.090 [104/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:36.090 [105/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:36.090 [106/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:36.090 [107/738] Generating lib/rte_gso_mingw with a custom command 00:03:36.090 [108/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:36.090 [109/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:36.090 [110/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:36.090 [111/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:36.090 [112/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:36.090 [113/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:36.090 [114/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:36.090 [115/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:36.090 [116/738] Generating lib/rte_ip_frag_mingw with a custom command 00:03:36.090 [117/738] Generating lib/rte_jobstats_mingw with a custom command 00:03:36.090 [118/738] Generating lib/rte_ip_frag_def with a custom command 00:03:36.090 [119/738] Linking static target lib/librte_ring.a 00:03:36.090 [120/738] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:36.090 [121/738] Generating lib/rte_jobstats_def with a custom command 00:03:36.090 [122/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:36.090 [123/738] Generating lib/rte_latencystats_mingw with a custom command 00:03:36.090 [124/738] Generating lib/rte_latencystats_def with a custom command 00:03:36.090 [125/738] Generating lib/rte_lpm_def with a custom command 00:03:36.090 [126/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:36.091 [127/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:36.091 [128/738] Generating lib/rte_lpm_mingw with a custom command 00:03:36.091 [129/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:36.091 [130/738] Generating lib/rte_member_def with a custom command 00:03:36.091 [131/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:36.091 [132/738] Generating lib/rte_member_mingw with a custom command 00:03:36.091 [133/738] Generating lib/rte_pcapng_mingw with a custom command 00:03:36.091 [134/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:36.091 [135/738] Linking static target lib/librte_meter.a 00:03:36.091 [136/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:36.091 [137/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:36.091 [138/738] Generating lib/rte_pcapng_def with a custom command 00:03:36.359 [139/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:36.359 [140/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:36.359 [141/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:36.359 [142/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:36.359 [143/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:36.359 [144/738] Generating lib/rte_rawdev_mingw with a custom command 00:03:36.359 [145/738] Generating lib/rte_power_def with a custom command 00:03:36.359 [146/738] Generating lib/rte_power_mingw with a custom command 00:03:36.359 [147/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:36.359 [148/738] Generating lib/rte_regexdev_def with a custom command 00:03:36.359 [149/738] Generating lib/rte_rawdev_def with a custom command 00:03:36.359 [150/738] Generating lib/rte_dmadev_mingw with a custom command 00:03:36.359 [151/738] Generating lib/rte_regexdev_mingw with a custom command 00:03:36.359 [152/738] Generating lib/rte_dmadev_def with a custom command 00:03:36.359 [153/738] Generating lib/rte_rib_def with a custom command 00:03:36.359 [154/738] Generating lib/rte_rib_mingw with a custom command 00:03:36.359 [155/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:36.359 [156/738] Generating lib/rte_reorder_mingw with a custom command 00:03:36.359 [157/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:36.359 [158/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:36.359 [159/738] Generating lib/rte_reorder_def with a custom command 00:03:36.618 [160/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:36.618 [161/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:36.618 [162/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:36.618 [163/738] Compiling C object 
lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:36.618 [164/738] Generating lib/rte_sched_def with a custom command 00:03:36.618 [165/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:36.618 [166/738] Generating lib/rte_sched_mingw with a custom command 00:03:36.618 [167/738] Generating lib/rte_security_def with a custom command 00:03:36.618 [168/738] Linking static target lib/librte_cfgfile.a 00:03:36.618 [169/738] Generating lib/rte_security_mingw with a custom command 00:03:36.618 [170/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:36.618 [171/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:36.618 [172/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:36.618 [173/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:36.618 [174/738] Linking static target lib/librte_jobstats.a 00:03:36.618 [175/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:36.618 [176/738] Generating lib/rte_stack_def with a custom command 00:03:36.618 [177/738] Generating lib/rte_stack_mingw with a custom command 00:03:36.618 [178/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:36.618 [179/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:36.618 [180/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.618 [181/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:36.618 [182/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:36.618 [183/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.618 [184/738] Linking static target lib/librte_telemetry.a 00:03:36.618 [185/738] Generating lib/rte_vhost_def with a custom command 00:03:36.618 [186/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:36.618 [187/738] Generating lib/rte_vhost_mingw with a custom command 00:03:36.618 [188/738] Linking target lib/librte_kvargs.so.23.0 00:03:36.618 [189/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:36.618 [190/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:36.618 [191/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:36.618 [192/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:36.618 [193/738] Linking static target lib/librte_timer.a 00:03:36.618 [194/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:36.618 [195/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:36.618 [196/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:36.618 [197/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:36.883 [198/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:36.883 [199/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:36.883 [200/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:36.883 [201/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:36.883 [202/738] Linking static target lib/librte_stack.a 00:03:36.883 [203/738] Generating lib/rte_ipsec_def with a custom command 00:03:36.883 [204/738] Generating lib/rte_ipsec_mingw with a custom command 00:03:36.883 [205/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:36.883 [206/738] Compiling C object 
lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:36.883 [207/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:36.883 [208/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:36.883 [209/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.883 [210/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:36.883 [211/738] Linking static target lib/librte_metrics.a 00:03:36.883 [212/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:36.883 [213/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.883 [214/738] Generating lib/rte_fib_def with a custom command 00:03:36.883 [215/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:36.883 [216/738] Generating lib/rte_fib_mingw with a custom command 00:03:36.883 [217/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:36.883 [218/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:36.883 [219/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:36.883 [220/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:36.883 [221/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:36.883 [222/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:36.883 [223/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:36.883 [224/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:36.883 [225/738] Linking static target lib/librte_cmdline.a 00:03:36.883 [226/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:36.883 [227/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:36.883 [228/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:36.883 [229/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:36.883 [230/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:36.883 [231/738] Generating lib/rte_port_def with a custom command 00:03:36.883 [232/738] Generating lib/rte_port_mingw with a custom command 00:03:36.883 [233/738] Generating lib/rte_pdump_mingw with a custom command 00:03:36.883 [234/738] Generating lib/rte_pdump_def with a custom command 00:03:36.883 [235/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:36.883 [236/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:36.883 [237/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:36.883 [238/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:36.883 [239/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:36.883 [240/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:36.883 [241/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:36.883 [242/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:36.883 [243/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:36.883 [244/738] Linking static target lib/librte_bitratestats.a 00:03:36.883 [245/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:36.883 [246/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:36.883 [247/738] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:36.883 [248/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:36.883 [249/738] Linking static target lib/librte_dmadev.a 00:03:37.143 [250/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:37.143 [251/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:37.143 [252/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:37.143 [253/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:37.143 [254/738] Linking static target lib/librte_rawdev.a 00:03:37.143 [255/738] Generating lib/rte_table_def with a custom command 00:03:37.143 [256/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:37.143 [257/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:37.143 [258/738] Generating lib/rte_table_mingw with a custom command 00:03:37.143 [259/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:37.143 [260/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:37.143 [261/738] Linking static target lib/librte_net.a 00:03:37.143 [262/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:37.143 [263/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:37.143 [264/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:37.143 [265/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:37.143 [266/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.143 [267/738] Generating lib/rte_pipeline_def with a custom command 00:03:37.143 [268/738] Generating lib/rte_pipeline_mingw with a custom command 00:03:37.143 [269/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:37.143 [270/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.143 [271/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.143 [272/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:37.143 [273/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:37.143 [274/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:37.143 [275/738] Generating lib/rte_graph_def with a custom command 00:03:37.143 [276/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:37.143 [277/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:37.143 [278/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:37.143 [279/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:37.143 [280/738] Generating lib/rte_graph_mingw with a custom command 00:03:37.143 [281/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:37.143 [282/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:37.143 [283/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:37.143 [284/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:37.143 [285/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:37.143 [286/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:37.143 [287/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:37.143 [288/738] Generating lib/rte_node_def with a custom 
command 00:03:37.143 [289/738] Generating lib/rte_node_mingw with a custom command 00:03:37.143 [290/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:37.143 [291/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:37.143 [292/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:37.143 [293/738] Linking static target lib/librte_compressdev.a 00:03:37.143 [294/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:37.143 [295/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:37.143 [296/738] Generating drivers/rte_bus_pci_def with a custom command 00:03:37.143 [297/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:37.143 [298/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:37.143 [299/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:37.143 [300/738] Generating drivers/rte_bus_vdev_def with a custom command 00:03:37.143 [301/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:37.143 [302/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:37.143 [303/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:37.143 [304/738] Linking static target lib/librte_mempool.a 00:03:37.143 [305/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:37.143 [306/738] Linking static target lib/librte_rcu.a 00:03:37.144 [307/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:37.144 [308/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:37.144 [309/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:37.144 [310/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:37.144 [311/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:37.403 [312/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:37.403 [313/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.403 [314/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:37.403 [315/738] Linking static target lib/librte_latencystats.a 00:03:37.403 [316/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.403 [317/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:37.403 [318/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:37.403 [319/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:37.403 [320/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:37.403 [321/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:37.403 [322/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.403 [323/738] Linking target lib/librte_telemetry.so.23.0 00:03:37.403 [324/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:37.403 [325/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:37.403 [326/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:37.403 [327/738] Linking static target lib/librte_regexdev.a 00:03:37.403 [328/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:37.403 [329/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 
00:03:37.403 [330/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:37.403 [331/738] Linking static target lib/librte_gpudev.a 00:03:37.403 [332/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:37.403 [333/738] Linking static target lib/librte_power.a 00:03:37.403 [334/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:37.403 [335/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:37.403 [336/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:37.403 [337/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:37.403 [338/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:37.403 [339/738] Linking static target lib/librte_gro.a 00:03:37.403 [340/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:37.403 [341/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.403 [342/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:37.403 [343/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:37.403 [344/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:37.403 [345/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.403 [346/738] Linking static target lib/librte_security.a 00:03:37.403 [347/738] Linking static target lib/librte_bbdev.a 00:03:37.403 [348/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:37.403 [349/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:37.403 [350/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:37.403 [351/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:37.403 [352/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:37.403 [353/738] Linking static target lib/librte_gso.a 00:03:37.403 [354/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:37.404 [355/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:37.404 [356/738] Linking static target lib/librte_reorder.a 00:03:37.404 [357/738] Linking static target lib/librte_distributor.a 00:03:37.404 [358/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:37.404 [359/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:37.404 [360/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:37.404 [361/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:37.404 [362/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:37.404 [363/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:37.404 [364/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:37.404 [365/738] Linking static target lib/librte_ip_frag.a 00:03:37.404 [366/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:37.404 [367/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:37.404 [368/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:37.404 [369/738] Linking static target lib/librte_pcapng.a 00:03:37.404 [370/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:37.665 [371/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:37.665 [372/738] Linking static 
target drivers/libtmp_rte_bus_vdev.a 00:03:37.665 [373/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:37.665 [374/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:37.665 [375/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:37.665 [376/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.665 [377/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:37.665 [378/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:37.665 [379/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:37.665 [380/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:37.665 [381/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:37.665 [382/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:37.665 [383/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:37.665 [384/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:37.665 [385/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:37.665 [386/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:37.665 [387/738] Linking static target lib/librte_eal.a 00:03:37.665 [388/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:37.665 [389/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:37.665 [390/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:37.665 [391/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.665 [392/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:37.665 [393/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:37.665 [394/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.665 [395/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:37.665 [396/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:37.665 [397/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.665 [398/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:37.665 [399/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:37.665 [400/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:37.665 [401/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.665 [402/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:37.665 [403/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:37.665 [404/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:37.665 [405/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:37.665 [406/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:37.665 [407/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:37.665 [408/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:37.924 [409/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:37.924 [410/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:37.924 [411/738] Linking static target lib/librte_graph.a 00:03:37.924 [412/738] Linking static target 
lib/librte_rib.a 00:03:37.924 [413/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:37.924 [414/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:37.924 [415/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.924 [416/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:37.924 [417/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:37.924 [418/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:37.924 [419/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:37.924 [420/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:37.924 [421/738] Linking static target lib/librte_mbuf.a 00:03:37.924 [422/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:37.924 [423/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:37.924 [424/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:37.924 [425/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.924 [426/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:37.924 [427/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:37.924 [428/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:37.924 [429/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:37.924 [430/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:37.924 [431/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.925 [432/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:37.925 [433/738] Linking static target drivers/librte_bus_vdev.a 00:03:37.925 [434/738] Linking static target lib/librte_lpm.a 00:03:37.925 [435/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:37.925 [436/738] Linking static target lib/librte_bpf.a 00:03:37.925 [437/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:37.925 [438/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:37.925 [439/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:37.925 [440/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:37.925 [441/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:37.925 [442/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.925 [443/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.925 [444/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:38.185 [445/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:38.185 [446/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:38.185 [447/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:38.185 [448/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:38.185 [449/738] Linking static target lib/librte_efd.a 00:03:38.185 [450/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:38.185 [451/738] Linking static target lib/librte_fib.a 00:03:38.185 [452/738] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:38.185 [453/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:38.185 [454/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:38.185 [455/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.185 [456/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:38.185 [457/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:38.185 [458/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:38.185 [459/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:38.185 [460/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:38.185 [461/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:38.185 [462/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:38.185 [463/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:38.185 [464/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:38.185 [465/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:38.185 [466/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:38.185 [467/738] Linking static target drivers/librte_bus_pci.a 00:03:38.185 [468/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:38.185 [469/738] Linking static target lib/librte_pdump.a 00:03:38.185 [470/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:38.185 [471/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:38.185 [472/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.185 [473/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:38.185 [474/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.445 [475/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.445 [476/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:38.445 [477/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.445 [478/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:38.445 [479/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:38.445 [480/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:38.445 [481/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:38.445 [482/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:38.446 [483/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:38.446 [484/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:38.446 [485/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [486/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [487/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [488/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:38.446 
[489/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:38.446 [490/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [491/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:38.446 [492/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:38.446 [493/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:38.446 [494/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [495/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:38.446 [496/738] Linking static target lib/librte_table.a 00:03:38.446 [497/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:38.446 [498/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:38.446 [499/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:38.446 [500/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:38.446 [501/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:38.446 [502/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:38.446 [503/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [504/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:38.446 [505/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:38.446 [506/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:38.446 [507/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:38.446 [508/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [509/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:38.446 [510/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:38.446 [511/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [512/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:38.446 [513/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:38.446 [514/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:38.446 [515/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:38.446 [516/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [517/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:38.446 [518/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:38.446 [519/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:38.446 [520/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:38.446 [521/738] Linking static target lib/librte_node.a 00:03:38.446 [522/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:38.446 [523/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.446 [524/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 
00:03:38.446 [525/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:38.446 [526/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:38.707 [527/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:38.707 [528/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:38.707 [529/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.707 [530/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:38.707 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:38.707 [532/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.707 [533/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:38.707 [534/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:38.707 [535/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:38.707 [536/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:38.707 [537/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:38.707 [538/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:38.707 [539/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:38.707 [540/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:38.707 [541/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:38.707 [542/738] Linking static target lib/librte_sched.a 00:03:38.707 [543/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:38.707 [544/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:38.707 [545/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:38.707 [546/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:38.707 [547/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:38.707 [548/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:38.707 [549/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:38.707 [550/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:38.707 [551/738] Linking static target drivers/librte_mempool_ring.a 00:03:38.707 [552/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:38.707 [553/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:38.707 [554/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:38.707 [555/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:38.707 [556/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:38.707 [557/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:38.707 [558/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:38.707 [559/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:38.707 [560/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:38.707 [561/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:38.707 [562/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:38.707 [563/738] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:38.707 [564/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.707 [565/738] Linking static target lib/librte_ipsec.a 00:03:38.968 [566/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:38.968 [567/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:38.968 [568/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:38.968 [569/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:38.968 [570/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:38.968 [571/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:38.968 [572/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:38.968 [573/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:38.968 [574/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.968 [575/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:38.968 [576/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:38.968 [577/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:38.968 [578/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:38.968 [579/738] Linking static target lib/librte_port.a 00:03:38.968 [580/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:38.968 [581/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:38.968 [582/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:38.968 [583/738] Linking static target lib/librte_cryptodev.a 00:03:38.968 [584/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:38.969 [585/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:38.969 [586/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:38.969 [587/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:38.969 [588/738] Linking static target lib/librte_member.a 00:03:38.969 [589/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:39.230 [590/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:39.230 [591/738] Linking static target lib/librte_ethdev.a 00:03:39.230 [592/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:39.230 [593/738] Linking static target lib/librte_hash.a 00:03:39.230 [594/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:39.230 [595/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:39.230 [596/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:39.230 [597/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:39.230 [598/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:39.230 [599/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:39.230 [600/738] Linking static target lib/librte_eventdev.a 00:03:39.230 [601/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:39.230 [602/738] Linking static target lib/librte_acl.a 
00:03:39.231 [603/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.231 [604/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.231 [605/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:39.490 [606/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:39.490 [607/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:39.490 [608/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.490 [609/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:39.490 [610/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.752 [611/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:39.752 [612/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:39.752 [613/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:39.752 [614/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.752 [615/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.013 [616/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:40.274 [617/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.274 [618/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:40.537 [619/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:40.799 [620/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:40.799 [621/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:41.061 [622/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:41.061 [623/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:41.061 [624/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:41.323 [625/738] Linking static target drivers/librte_net_i40e.a 00:03:41.323 [626/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:41.584 [627/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:42.156 [628/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:42.156 [629/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.729 [630/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.729 [631/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.938 [632/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:46.938 [633/738] Linking static target lib/librte_pipeline.a 00:03:46.938 [634/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:46.938 [635/738] Linking static target lib/librte_vhost.a 00:03:47.199 [636/738] Linking target app/dpdk-pdump 00:03:47.199 [637/738] Linking target app/dpdk-test-fib 00:03:47.199 [638/738] Linking target app/dpdk-test-crypto-perf 00:03:47.199 [639/738] Linking target app/dpdk-test-gpudev 00:03:47.199 [640/738] Linking target app/dpdk-test-sad 00:03:47.199 [641/738] Linking target app/dpdk-test-flow-perf 00:03:47.199 [642/738] Linking target 
app/dpdk-test-security-perf 00:03:47.199 [643/738] Linking target app/dpdk-test-eventdev 00:03:47.199 [644/738] Linking target app/dpdk-test-regex 00:03:47.199 [645/738] Linking target app/dpdk-dumpcap 00:03:47.199 [646/738] Linking target app/dpdk-test-acl 00:03:47.199 [647/738] Linking target app/dpdk-proc-info 00:03:47.199 [648/738] Linking target app/dpdk-test-cmdline 00:03:47.199 [649/738] Linking target app/dpdk-test-bbdev 00:03:47.199 [650/738] Linking target app/dpdk-test-compress-perf 00:03:47.199 [651/738] Linking target app/dpdk-test-pipeline 00:03:47.199 [652/738] Linking target app/dpdk-testpmd 00:03:47.462 [653/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.957 [654/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.872 [655/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.872 [656/738] Linking target lib/librte_eal.so.23.0 00:03:50.872 [657/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:50.872 [658/738] Linking target lib/librte_meter.so.23.0 00:03:50.872 [659/738] Linking target lib/librte_ring.so.23.0 00:03:50.872 [660/738] Linking target lib/librte_timer.so.23.0 00:03:50.872 [661/738] Linking target lib/librte_pci.so.23.0 00:03:50.872 [662/738] Linking target lib/librte_cfgfile.so.23.0 00:03:50.872 [663/738] Linking target lib/librte_jobstats.so.23.0 00:03:50.872 [664/738] Linking target lib/librte_stack.so.23.0 00:03:50.872 [665/738] Linking target lib/librte_dmadev.so.23.0 00:03:50.872 [666/738] Linking target lib/librte_rawdev.so.23.0 00:03:50.872 [667/738] Linking target lib/librte_graph.so.23.0 00:03:50.872 [668/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:50.872 [669/738] Linking target lib/librte_acl.so.23.0 00:03:50.872 [670/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:50.872 [671/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:50.872 [672/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:50.872 [673/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:50.872 [674/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:50.872 [675/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:50.872 [676/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:50.872 [677/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:51.134 [678/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:51.134 [679/738] Linking target lib/librte_rcu.so.23.0 00:03:51.134 [680/738] Linking target lib/librte_mempool.so.23.0 00:03:51.134 [681/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:51.134 [682/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:51.134 [683/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:51.134 [684/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:51.134 [685/738] Linking target lib/librte_rib.so.23.0 00:03:51.134 [686/738] Linking target lib/librte_mbuf.so.23.0 00:03:51.396 [687/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:51.396 [688/738] Generating symbol file 
lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:51.396 [689/738] Linking target lib/librte_fib.so.23.0 00:03:51.396 [690/738] Linking target lib/librte_bbdev.so.23.0 00:03:51.396 [691/738] Linking target lib/librte_compressdev.so.23.0 00:03:51.396 [692/738] Linking target lib/librte_net.so.23.0 00:03:51.396 [693/738] Linking target lib/librte_gpudev.so.23.0 00:03:51.396 [694/738] Linking target lib/librte_distributor.so.23.0 00:03:51.396 [695/738] Linking target lib/librte_reorder.so.23.0 00:03:51.396 [696/738] Linking target lib/librte_regexdev.so.23.0 00:03:51.396 [697/738] Linking target lib/librte_sched.so.23.0 00:03:51.396 [698/738] Linking target lib/librte_cryptodev.so.23.0 00:03:51.659 [699/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:51.659 [700/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:51.659 [701/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:51.659 [702/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.659 [703/738] Linking target lib/librte_hash.so.23.0 00:03:51.659 [704/738] Linking target lib/librte_security.so.23.0 00:03:51.659 [705/738] Linking target lib/librte_cmdline.so.23.0 00:03:51.659 [706/738] Linking target lib/librte_ethdev.so.23.0 00:03:51.920 [707/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:51.920 [708/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:51.920 [709/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:51.920 [710/738] Linking target lib/librte_efd.so.23.0 00:03:51.920 [711/738] Linking target lib/librte_lpm.so.23.0 00:03:51.920 [712/738] Linking target lib/librte_member.so.23.0 00:03:51.920 [713/738] Linking target lib/librte_ipsec.so.23.0 00:03:51.920 [714/738] Linking target lib/librte_metrics.so.23.0 00:03:51.920 [715/738] Linking target lib/librte_gso.so.23.0 00:03:51.920 [716/738] Linking target lib/librte_ip_frag.so.23.0 00:03:51.920 [717/738] Linking target lib/librte_gro.so.23.0 00:03:51.920 [718/738] Linking target lib/librte_pcapng.so.23.0 00:03:51.920 [719/738] Linking target lib/librte_bpf.so.23.0 00:03:51.920 [720/738] Linking target lib/librte_power.so.23.0 00:03:51.920 [721/738] Linking target lib/librte_eventdev.so.23.0 00:03:51.920 [722/738] Linking target lib/librte_vhost.so.23.0 00:03:51.920 [723/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:51.920 [724/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:52.181 [725/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:52.181 [726/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:52.181 [727/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:52.181 [728/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:52.181 [729/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:52.181 [730/738] Linking target lib/librte_node.so.23.0 00:03:52.181 [731/738] Linking target lib/librte_bitratestats.so.23.0 00:03:52.181 [732/738] Linking target lib/librte_latencystats.so.23.0 00:03:52.181 [733/738] Linking target lib/librte_port.so.23.0 00:03:52.181 [734/738] Linking target lib/librte_pdump.so.23.0 00:03:52.181 [735/738] 
Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:52.443 [736/738] Linking target lib/librte_table.so.23.0 00:03:52.443 [737/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:52.704 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:52.704 17:30:52 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:52.704 17:30:52 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:52.704 17:30:52 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:03:52.704 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:52.704 [0/1] Installing files. 00:03:52.970 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:52.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:52.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:52.973 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.973 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:52.974 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:52.974 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.974 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.975 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:52.976 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:52.976 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.976 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:52.977 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:53.243 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:53.243 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:53.243 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:53.243 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:53.243 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.243 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.243 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.243 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.243 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.243 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.244 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.245 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.246 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:53.247 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:53.247 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:53.247 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:53.247 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:53.247 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:53.247 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:53.247 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:53.247 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:53.247 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:53.247 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:53.247 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:53.247 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:53.247 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:53.247 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:53.247 Installing symlink pointing to librte_mbuf.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:53.247 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:53.247 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:53.247 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:53.247 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:53.247 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:53.248 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:53.248 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:53.248 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:53.248 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:53.248 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:53.248 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:53.248 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:53.248 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:53.248 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:53.248 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:53.248 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:53.248 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:53.248 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:53.248 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:53.248 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:53.248 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:53.248 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:53.248 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:53.248 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
00:03:53.248 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:53.248 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:53.248 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:53.248 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:53.248 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:53.248 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:53.248 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:53.248 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:53.248 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:53.248 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:53.248 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:53.248 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:53.248 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:53.248 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:53.248 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:53.248 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:53.248 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:53.248 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:53.248 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:53.248 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:53.248 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:53.248 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:53.248 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:53.248 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:53.248 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:53.248 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:53.248 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:53.248 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:53.248 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:53.248 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:53.248 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:53.248 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:53.248 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:53.248 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:53.248 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:53.248 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:53.248 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:53.248 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:53.248 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:53.248 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:53.248 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:53.248 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:53.248 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:53.248 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:53.248 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:53.248 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:53.248 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:53.248 Installing symlink pointing to librte_stack.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:53.248 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:53.248 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:53.248 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:53.248 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:53.248 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:53.248 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:53.248 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:53.248 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:53.248 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:53.248 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:53.248 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:53.249 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:53.249 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:53.249 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:53.249 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:53.249 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:53.249 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:53.249 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:53.249 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:53.249 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:53.249 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:53.249 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:53.249 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:53.249 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:53.249 './librte_bus_pci.so.23.0' -> 
'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:53.249 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:53.249 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:53.249 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:53.249 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:53.249 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:53.249 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:53.249 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:53.249 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:53.249 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:53.249 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:53.249 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:53.249 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:53.249 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:53.249 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:53.511 17:30:53 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:53.511 17:30:53 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.511 00:03:53.511 real 0m24.953s 00:03:53.511 user 6m8.603s 00:03:53.511 sys 3m20.212s 00:03:53.511 17:30:53 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:53.511 17:30:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:53.511 ************************************ 00:03:53.511 END TEST build_native_dpdk 00:03:53.511 ************************************ 00:03:53.511 17:30:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:53.511 17:30:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:53.511 17:30:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:53.511 17:30:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:53.511 17:30:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:53.511 17:30:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:53.511 17:30:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:53.511 17:30:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:53.511 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
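The './librte_*.so*' -> 'dpdk/pmds-23.0/...' entries and the symlink-drivers-solibs.sh run above lay down the conventional three-name versioning chain for each DPDK library, with the PMD drivers additionally exposed through the dpdk/pmds-23.0 plugin directory. A minimal sketch of how one such chain can be inspected by hand (librte_eal is an arbitrary pick; the prefix matches the build directory used above):

    # Inspect one soname chain in the install tree (illustrative only).
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
    ls -l librte_eal.so librte_eal.so.23 librte_eal.so.23.0   # dev link -> soname link -> real object
    readelf -d librte_eal.so.23.0 | grep SONAME               # expected soname: librte_eal.so.23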
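The configure invocation above points SPDK at this prebuilt DPDK via --with-dpdk, and the final entry notes that the extra link flags come from the installed pkgconfig directory (the libdpdk.pc and libdpdk-libs.pc files installed earlier). A hedged way to preview what those .pc files resolve to, using only stock pkg-config:

    # Preview the flags SPDK's configure picks up from the DPDK install (a sketch).
    DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    PKG_CONFIG_PATH="$DPDK/lib/pkgconfig" pkg-config --cflags --libs libdpdk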
00:03:53.772 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:53.772 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:53.773 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:54.346 Using 'verbs' RDMA provider 00:04:10.263 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:22.500 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:23.071 Creating mk/config.mk...done. 00:04:23.071 Creating mk/cc.flags.mk...done. 00:04:23.071 Type 'make' to build. 00:04:23.071 17:31:22 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:04:23.071 17:31:22 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:23.071 17:31:22 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:23.071 17:31:22 -- common/autotest_common.sh@10 -- $ set +x 00:04:23.071 ************************************ 00:04:23.071 START TEST make 00:04:23.071 ************************************ 00:04:23.071 17:31:22 make -- common/autotest_common.sh@1125 -- $ make -j144 00:04:23.644 make[1]: Nothing to be done for 'all'. 00:04:25.025 The Meson build system 00:04:25.025 Version: 1.5.0 00:04:25.025 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:25.025 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:25.025 Build type: native build 00:04:25.026 Project name: libvfio-user 00:04:25.026 Project version: 0.0.1 00:04:25.026 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:25.026 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:25.026 Host machine cpu family: x86_64 00:04:25.026 Host machine cpu: x86_64 00:04:25.026 Run-time dependency threads found: YES 00:04:25.026 Library dl found: YES 00:04:25.026 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:25.026 Run-time dependency json-c found: YES 0.17 00:04:25.026 Run-time dependency cmocka found: YES 1.1.7 00:04:25.026 Program pytest-3 found: NO 00:04:25.026 Program flake8 found: NO 00:04:25.026 Program misspell-fixer found: NO 00:04:25.026 Program restructuredtext-lint found: NO 00:04:25.026 Program valgrind found: YES (/usr/bin/valgrind) 00:04:25.026 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:25.026 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:25.026 Compiler for C supports arguments -Wwrite-strings: YES 00:04:25.026 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:25.026 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:25.026 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:25.026 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:25.026 Build targets in project: 8 00:04:25.026 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:25.026 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:25.026 00:04:25.026 libvfio-user 0.0.1 00:04:25.026 00:04:25.026 User defined options 00:04:25.026 buildtype : debug 00:04:25.026 default_library: shared 00:04:25.026 libdir : /usr/local/lib 00:04:25.026 00:04:25.026 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:25.284 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:25.543 [1/37] Compiling C object samples/null.p/null.c.o 00:04:25.543 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:25.543 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:25.543 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:25.543 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:25.543 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:25.543 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:25.543 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:25.543 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:25.543 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:25.543 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:25.543 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:25.543 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:25.543 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:25.543 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:25.543 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:25.543 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:25.543 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:25.543 [19/37] Compiling C object samples/server.p/server.c.o 00:04:25.543 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:25.543 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:25.543 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:25.543 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:25.543 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:25.543 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:25.543 [26/37] Compiling C object samples/client.p/client.c.o 00:04:25.543 [27/37] Linking target samples/client 00:04:25.543 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:25.543 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:25.801 [30/37] Linking target test/unit_tests 00:04:25.801 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:04:25.801 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:25.801 [33/37] Linking target samples/gpio-pci-idio-16 00:04:25.801 [34/37] Linking target samples/lspci 00:04:25.801 [35/37] Linking target samples/shadow_ioeventfd_server 00:04:25.801 [36/37] Linking target samples/null 00:04:25.801 [37/37] Linking target samples/server 00:04:25.801 INFO: autodetecting backend as ninja 00:04:25.801 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
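The Meson summary above (buildtype debug, shared default_library, libdir /usr/local/lib) corresponds to a setup invocation roughly like the sketch below; in this job the call is driven by SPDK's build scripts rather than typed by hand, and the DESTDIR-redirected install that actually runs appears verbatim in the next entry.

    # Rough hand-rolled equivalent of the libvfio-user configure/build above (a sketch).
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    meson setup ../build/libvfio-user/build-debug \
        --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C ../build/libvfio-user/build-debug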
00:04:26.061 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:26.321 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:26.321 ninja: no work to do. 00:04:52.886 CC lib/ut_mock/mock.o 00:04:52.886 CC lib/log/log.o 00:04:52.886 CC lib/log/log_flags.o 00:04:52.886 CC lib/ut/ut.o 00:04:52.886 CC lib/log/log_deprecated.o 00:04:52.886 LIB libspdk_log.a 00:04:52.886 LIB libspdk_ut.a 00:04:52.886 LIB libspdk_ut_mock.a 00:04:52.886 SO libspdk_ut_mock.so.6.0 00:04:52.886 SO libspdk_ut.so.2.0 00:04:52.886 SO libspdk_log.so.7.0 00:04:52.886 SYMLINK libspdk_ut_mock.so 00:04:52.886 SYMLINK libspdk_ut.so 00:04:52.886 SYMLINK libspdk_log.so 00:04:52.886 CC lib/dma/dma.o 00:04:52.886 CC lib/ioat/ioat.o 00:04:52.886 CXX lib/trace_parser/trace.o 00:04:52.886 CC lib/util/base64.o 00:04:52.886 CC lib/util/bit_array.o 00:04:52.886 CC lib/util/cpuset.o 00:04:52.886 CC lib/util/crc16.o 00:04:52.886 CC lib/util/crc32.o 00:04:52.886 CC lib/util/crc32c.o 00:04:52.886 CC lib/util/crc32_ieee.o 00:04:52.886 CC lib/util/crc64.o 00:04:52.886 CC lib/util/dif.o 00:04:52.886 CC lib/util/fd.o 00:04:52.886 CC lib/util/fd_group.o 00:04:52.886 CC lib/util/file.o 00:04:52.886 CC lib/util/hexlify.o 00:04:52.886 CC lib/util/iov.o 00:04:52.886 CC lib/util/math.o 00:04:52.886 CC lib/util/net.o 00:04:52.886 CC lib/util/pipe.o 00:04:52.886 CC lib/util/strerror_tls.o 00:04:52.886 CC lib/util/string.o 00:04:52.886 CC lib/util/uuid.o 00:04:52.886 CC lib/util/xor.o 00:04:52.886 CC lib/util/zipf.o 00:04:52.886 CC lib/util/md5.o 00:04:52.886 CC lib/vfio_user/host/vfio_user_pci.o 00:04:52.886 CC lib/vfio_user/host/vfio_user.o 00:04:52.886 LIB libspdk_dma.a 00:04:52.886 SO libspdk_dma.so.5.0 00:04:52.886 LIB libspdk_ioat.a 00:04:52.886 SO libspdk_ioat.so.7.0 00:04:52.886 SYMLINK libspdk_dma.so 00:04:52.886 SYMLINK libspdk_ioat.so 00:04:52.886 LIB libspdk_vfio_user.a 00:04:52.886 SO libspdk_vfio_user.so.5.0 00:04:52.886 LIB libspdk_util.a 00:04:52.886 SYMLINK libspdk_vfio_user.so 00:04:52.886 SO libspdk_util.so.10.0 00:04:52.886 SYMLINK libspdk_util.so 00:04:52.886 LIB libspdk_trace_parser.a 00:04:52.886 SO libspdk_trace_parser.so.6.0 00:04:52.886 SYMLINK libspdk_trace_parser.so 00:04:52.886 CC lib/conf/conf.o 00:04:52.886 CC lib/rdma_provider/common.o 00:04:52.886 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:52.886 CC lib/rdma_utils/rdma_utils.o 00:04:52.886 CC lib/json/json_parse.o 00:04:52.886 CC lib/idxd/idxd.o 00:04:52.886 CC lib/env_dpdk/env.o 00:04:52.887 CC lib/json/json_util.o 00:04:52.887 CC lib/idxd/idxd_user.o 00:04:52.887 CC lib/vmd/vmd.o 00:04:52.887 CC lib/json/json_write.o 00:04:52.887 CC lib/env_dpdk/memory.o 00:04:52.887 CC lib/idxd/idxd_kernel.o 00:04:52.887 CC lib/vmd/led.o 00:04:52.887 CC lib/env_dpdk/pci.o 00:04:52.887 CC lib/env_dpdk/init.o 00:04:52.887 CC lib/env_dpdk/threads.o 00:04:52.887 CC lib/env_dpdk/pci_ioat.o 00:04:52.887 CC lib/env_dpdk/pci_virtio.o 00:04:52.887 CC lib/env_dpdk/pci_vmd.o 00:04:52.887 CC lib/env_dpdk/pci_idxd.o 00:04:52.887 CC lib/env_dpdk/pci_event.o 00:04:52.887 CC lib/env_dpdk/sigbus_handler.o 00:04:52.887 CC lib/env_dpdk/pci_dpdk.o 00:04:52.887 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:52.887 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:52.887 LIB libspdk_rdma_provider.a 00:04:52.887 SO libspdk_rdma_provider.so.6.0 00:04:52.887 LIB libspdk_conf.a 00:04:52.887 LIB libspdk_rdma_utils.a 00:04:52.887 
SO libspdk_conf.so.6.0 00:04:52.887 SYMLINK libspdk_rdma_provider.so 00:04:52.887 SO libspdk_rdma_utils.so.1.0 00:04:52.887 LIB libspdk_json.a 00:04:52.887 SYMLINK libspdk_conf.so 00:04:52.887 SO libspdk_json.so.6.0 00:04:52.887 SYMLINK libspdk_rdma_utils.so 00:04:52.887 SYMLINK libspdk_json.so 00:04:52.887 LIB libspdk_vmd.a 00:04:52.887 SO libspdk_vmd.so.6.0 00:04:52.887 LIB libspdk_idxd.a 00:04:52.887 SO libspdk_idxd.so.12.1 00:04:52.887 SYMLINK libspdk_vmd.so 00:04:52.887 SYMLINK libspdk_idxd.so 00:04:52.887 CC lib/jsonrpc/jsonrpc_server.o 00:04:52.887 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:52.887 CC lib/jsonrpc/jsonrpc_client.o 00:04:52.887 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:52.887 LIB libspdk_jsonrpc.a 00:04:52.887 SO libspdk_jsonrpc.so.6.0 00:04:52.887 SYMLINK libspdk_jsonrpc.so 00:04:52.887 LIB libspdk_env_dpdk.a 00:04:52.887 SO libspdk_env_dpdk.so.15.0 00:04:52.887 SYMLINK libspdk_env_dpdk.so 00:04:52.887 CC lib/rpc/rpc.o 00:04:52.887 LIB libspdk_rpc.a 00:04:52.887 SO libspdk_rpc.so.6.0 00:04:52.887 SYMLINK libspdk_rpc.so 00:04:52.887 CC lib/notify/notify.o 00:04:52.887 CC lib/trace/trace.o 00:04:52.887 CC lib/trace/trace_flags.o 00:04:52.887 CC lib/notify/notify_rpc.o 00:04:52.887 CC lib/trace/trace_rpc.o 00:04:52.887 CC lib/keyring/keyring.o 00:04:52.887 CC lib/keyring/keyring_rpc.o 00:04:52.887 LIB libspdk_notify.a 00:04:52.887 SO libspdk_notify.so.6.0 00:04:52.887 LIB libspdk_keyring.a 00:04:52.887 LIB libspdk_trace.a 00:04:52.887 SYMLINK libspdk_notify.so 00:04:52.887 SO libspdk_keyring.so.2.0 00:04:52.887 SO libspdk_trace.so.11.0 00:04:52.887 SYMLINK libspdk_keyring.so 00:04:52.887 SYMLINK libspdk_trace.so 00:04:53.147 CC lib/sock/sock.o 00:04:53.147 CC lib/sock/sock_rpc.o 00:04:53.147 CC lib/thread/thread.o 00:04:53.147 CC lib/thread/iobuf.o 00:04:53.408 LIB libspdk_sock.a 00:04:53.669 SO libspdk_sock.so.10.0 00:04:53.669 SYMLINK libspdk_sock.so 00:04:53.930 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:53.930 CC lib/nvme/nvme_ctrlr.o 00:04:53.930 CC lib/nvme/nvme_fabric.o 00:04:53.930 CC lib/nvme/nvme_ns_cmd.o 00:04:53.930 CC lib/nvme/nvme_ns.o 00:04:53.930 CC lib/nvme/nvme_pcie_common.o 00:04:53.930 CC lib/nvme/nvme_pcie.o 00:04:53.930 CC lib/nvme/nvme_qpair.o 00:04:53.930 CC lib/nvme/nvme.o 00:04:53.930 CC lib/nvme/nvme_quirks.o 00:04:53.930 CC lib/nvme/nvme_transport.o 00:04:53.930 CC lib/nvme/nvme_discovery.o 00:04:53.930 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:53.930 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:53.930 CC lib/nvme/nvme_tcp.o 00:04:53.930 CC lib/nvme/nvme_opal.o 00:04:53.930 CC lib/nvme/nvme_io_msg.o 00:04:53.930 CC lib/nvme/nvme_poll_group.o 00:04:53.930 CC lib/nvme/nvme_zns.o 00:04:53.930 CC lib/nvme/nvme_stubs.o 00:04:53.930 CC lib/nvme/nvme_auth.o 00:04:53.930 CC lib/nvme/nvme_cuse.o 00:04:53.930 CC lib/nvme/nvme_vfio_user.o 00:04:53.930 CC lib/nvme/nvme_rdma.o 00:04:54.502 LIB libspdk_thread.a 00:04:54.502 SO libspdk_thread.so.10.1 00:04:54.502 SYMLINK libspdk_thread.so 00:04:55.073 CC lib/fsdev/fsdev.o 00:04:55.073 CC lib/fsdev/fsdev_io.o 00:04:55.073 CC lib/blob/blobstore.o 00:04:55.073 CC lib/fsdev/fsdev_rpc.o 00:04:55.073 CC lib/blob/request.o 00:04:55.073 CC lib/blob/zeroes.o 00:04:55.073 CC lib/blob/blob_bs_dev.o 00:04:55.073 CC lib/accel/accel.o 00:04:55.073 CC lib/accel/accel_rpc.o 00:04:55.073 CC lib/vfu_tgt/tgt_endpoint.o 00:04:55.073 CC lib/init/json_config.o 00:04:55.073 CC lib/accel/accel_sw.o 00:04:55.073 CC lib/vfu_tgt/tgt_rpc.o 00:04:55.073 CC lib/virtio/virtio.o 00:04:55.073 CC lib/init/subsystem.o 00:04:55.073 CC lib/init/subsystem_rpc.o 
00:04:55.073 CC lib/virtio/virtio_vhost_user.o 00:04:55.073 CC lib/init/rpc.o 00:04:55.073 CC lib/virtio/virtio_vfio_user.o 00:04:55.073 CC lib/virtio/virtio_pci.o 00:04:55.073 LIB libspdk_init.a 00:04:55.334 SO libspdk_init.so.6.0 00:04:55.334 LIB libspdk_vfu_tgt.a 00:04:55.334 LIB libspdk_virtio.a 00:04:55.334 SO libspdk_vfu_tgt.so.3.0 00:04:55.334 SO libspdk_virtio.so.7.0 00:04:55.334 SYMLINK libspdk_init.so 00:04:55.334 SYMLINK libspdk_vfu_tgt.so 00:04:55.334 SYMLINK libspdk_virtio.so 00:04:55.595 LIB libspdk_fsdev.a 00:04:55.595 SO libspdk_fsdev.so.1.0 00:04:55.595 SYMLINK libspdk_fsdev.so 00:04:55.595 CC lib/event/app.o 00:04:55.595 CC lib/event/reactor.o 00:04:55.595 CC lib/event/log_rpc.o 00:04:55.595 CC lib/event/app_rpc.o 00:04:55.595 CC lib/event/scheduler_static.o 00:04:55.857 LIB libspdk_accel.a 00:04:55.857 SO libspdk_accel.so.16.0 00:04:55.857 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:55.857 LIB libspdk_nvme.a 00:04:56.118 SYMLINK libspdk_accel.so 00:04:56.118 LIB libspdk_event.a 00:04:56.118 SO libspdk_nvme.so.14.0 00:04:56.118 SO libspdk_event.so.14.0 00:04:56.118 SYMLINK libspdk_event.so 00:04:56.379 SYMLINK libspdk_nvme.so 00:04:56.379 CC lib/bdev/bdev.o 00:04:56.379 CC lib/bdev/bdev_rpc.o 00:04:56.379 CC lib/bdev/bdev_zone.o 00:04:56.379 CC lib/bdev/part.o 00:04:56.379 CC lib/bdev/scsi_nvme.o 00:04:56.672 LIB libspdk_fuse_dispatcher.a 00:04:56.672 SO libspdk_fuse_dispatcher.so.1.0 00:04:56.672 SYMLINK libspdk_fuse_dispatcher.so 00:04:57.682 LIB libspdk_blob.a 00:04:57.682 SO libspdk_blob.so.11.0 00:04:57.682 SYMLINK libspdk_blob.so 00:04:57.944 CC lib/blobfs/blobfs.o 00:04:57.944 CC lib/blobfs/tree.o 00:04:57.944 CC lib/lvol/lvol.o 00:04:58.890 LIB libspdk_bdev.a 00:04:58.890 SO libspdk_bdev.so.16.0 00:04:58.890 LIB libspdk_blobfs.a 00:04:58.890 SO libspdk_blobfs.so.10.0 00:04:58.890 SYMLINK libspdk_bdev.so 00:04:58.890 LIB libspdk_lvol.a 00:04:58.890 SYMLINK libspdk_blobfs.so 00:04:58.890 SO libspdk_lvol.so.10.0 00:04:58.890 SYMLINK libspdk_lvol.so 00:04:59.150 CC lib/nvmf/ctrlr.o 00:04:59.150 CC lib/nvmf/ctrlr_discovery.o 00:04:59.150 CC lib/ublk/ublk.o 00:04:59.150 CC lib/nvmf/ctrlr_bdev.o 00:04:59.150 CC lib/ublk/ublk_rpc.o 00:04:59.150 CC lib/nvmf/subsystem.o 00:04:59.150 CC lib/nvmf/nvmf.o 00:04:59.150 CC lib/scsi/dev.o 00:04:59.150 CC lib/nvmf/nvmf_rpc.o 00:04:59.150 CC lib/scsi/lun.o 00:04:59.150 CC lib/nvmf/transport.o 00:04:59.150 CC lib/scsi/port.o 00:04:59.150 CC lib/nvmf/tcp.o 00:04:59.150 CC lib/nvmf/stubs.o 00:04:59.150 CC lib/scsi/scsi.o 00:04:59.150 CC lib/nbd/nbd.o 00:04:59.150 CC lib/ftl/ftl_core.o 00:04:59.150 CC lib/nvmf/mdns_server.o 00:04:59.150 CC lib/scsi/scsi_bdev.o 00:04:59.150 CC lib/ftl/ftl_init.o 00:04:59.150 CC lib/nvmf/vfio_user.o 00:04:59.150 CC lib/scsi/scsi_pr.o 00:04:59.150 CC lib/nbd/nbd_rpc.o 00:04:59.150 CC lib/nvmf/rdma.o 00:04:59.150 CC lib/scsi/scsi_rpc.o 00:04:59.150 CC lib/nvmf/auth.o 00:04:59.150 CC lib/ftl/ftl_layout.o 00:04:59.150 CC lib/ftl/ftl_debug.o 00:04:59.150 CC lib/scsi/task.o 00:04:59.150 CC lib/ftl/ftl_io.o 00:04:59.150 CC lib/ftl/ftl_sb.o 00:04:59.150 CC lib/ftl/ftl_l2p.o 00:04:59.150 CC lib/ftl/ftl_l2p_flat.o 00:04:59.150 CC lib/ftl/ftl_nv_cache.o 00:04:59.150 CC lib/ftl/ftl_band.o 00:04:59.150 CC lib/ftl/ftl_band_ops.o 00:04:59.150 CC lib/ftl/ftl_writer.o 00:04:59.150 CC lib/ftl/ftl_rq.o 00:04:59.150 CC lib/ftl/ftl_reloc.o 00:04:59.150 CC lib/ftl/ftl_l2p_cache.o 00:04:59.150 CC lib/ftl/ftl_p2l.o 00:04:59.150 CC lib/ftl/ftl_p2l_log.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt.o 00:04:59.150 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:59.150 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:59.150 CC lib/ftl/utils/ftl_conf.o 00:04:59.150 CC lib/ftl/utils/ftl_md.o 00:04:59.150 CC lib/ftl/utils/ftl_mempool.o 00:04:59.150 CC lib/ftl/utils/ftl_bitmap.o 00:04:59.150 CC lib/ftl/utils/ftl_property.o 00:04:59.150 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:59.150 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:59.150 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:59.150 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:59.150 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:59.150 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:59.150 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:59.150 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:59.410 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:59.411 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:59.411 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:59.411 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:59.411 CC lib/ftl/base/ftl_base_dev.o 00:04:59.411 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:59.411 CC lib/ftl/ftl_trace.o 00:04:59.411 CC lib/ftl/base/ftl_base_bdev.o 00:04:59.983 LIB libspdk_scsi.a 00:04:59.983 SO libspdk_scsi.so.9.0 00:04:59.983 LIB libspdk_nbd.a 00:04:59.983 SO libspdk_nbd.so.7.0 00:05:00.244 SYMLINK libspdk_scsi.so 00:05:00.244 SYMLINK libspdk_nbd.so 00:05:00.244 LIB libspdk_ublk.a 00:05:00.244 SO libspdk_ublk.so.3.0 00:05:00.244 SYMLINK libspdk_ublk.so 00:05:00.505 LIB libspdk_ftl.a 00:05:00.505 CC lib/iscsi/init_grp.o 00:05:00.505 CC lib/vhost/vhost.o 00:05:00.505 CC lib/iscsi/conn.o 00:05:00.505 CC lib/vhost/vhost_rpc.o 00:05:00.505 CC lib/vhost/vhost_scsi.o 00:05:00.505 CC lib/iscsi/iscsi.o 00:05:00.505 CC lib/vhost/vhost_blk.o 00:05:00.505 CC lib/iscsi/param.o 00:05:00.505 CC lib/iscsi/portal_grp.o 00:05:00.505 CC lib/vhost/rte_vhost_user.o 00:05:00.505 CC lib/iscsi/tgt_node.o 00:05:00.505 CC lib/iscsi/iscsi_subsystem.o 00:05:00.505 CC lib/iscsi/iscsi_rpc.o 00:05:00.505 CC lib/iscsi/task.o 00:05:00.766 SO libspdk_ftl.so.9.0 00:05:01.027 SYMLINK libspdk_ftl.so 00:05:01.289 LIB libspdk_nvmf.a 00:05:01.549 SO libspdk_nvmf.so.19.0 00:05:01.549 LIB libspdk_vhost.a 00:05:01.549 SO libspdk_vhost.so.8.0 00:05:01.549 SYMLINK libspdk_vhost.so 00:05:01.549 SYMLINK libspdk_nvmf.so 00:05:01.809 LIB libspdk_iscsi.a 00:05:01.810 SO libspdk_iscsi.so.8.0 00:05:02.070 SYMLINK libspdk_iscsi.so 00:05:02.643 CC module/env_dpdk/env_dpdk_rpc.o 00:05:02.643 CC module/vfu_device/vfu_virtio.o 00:05:02.643 CC module/vfu_device/vfu_virtio_blk.o 00:05:02.643 CC module/vfu_device/vfu_virtio_scsi.o 00:05:02.643 CC module/vfu_device/vfu_virtio_rpc.o 00:05:02.643 CC module/vfu_device/vfu_virtio_fs.o 00:05:02.643 CC module/sock/posix/posix.o 00:05:02.643 CC module/accel/ioat/accel_ioat_rpc.o 00:05:02.643 CC module/accel/ioat/accel_ioat.o 00:05:02.643 LIB libspdk_env_dpdk_rpc.a 00:05:02.643 CC module/fsdev/aio/fsdev_aio.o 00:05:02.643 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:02.643 CC module/fsdev/aio/linux_aio_mgr.o 00:05:02.643 CC module/scheduler/gscheduler/gscheduler.o 00:05:02.643 CC module/accel/error/accel_error.o 00:05:02.643 CC module/accel/error/accel_error_rpc.o 
00:05:02.643 CC module/accel/iaa/accel_iaa.o 00:05:02.643 CC module/keyring/linux/keyring.o 00:05:02.643 CC module/accel/dsa/accel_dsa.o 00:05:02.643 CC module/accel/iaa/accel_iaa_rpc.o 00:05:02.643 CC module/accel/dsa/accel_dsa_rpc.o 00:05:02.643 CC module/keyring/linux/keyring_rpc.o 00:05:02.643 CC module/keyring/file/keyring.o 00:05:02.643 CC module/keyring/file/keyring_rpc.o 00:05:02.643 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:02.643 CC module/blob/bdev/blob_bdev.o 00:05:02.643 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:02.643 SO libspdk_env_dpdk_rpc.so.6.0 00:05:02.904 SYMLINK libspdk_env_dpdk_rpc.so 00:05:02.904 LIB libspdk_scheduler_gscheduler.a 00:05:02.904 LIB libspdk_keyring_file.a 00:05:02.904 LIB libspdk_keyring_linux.a 00:05:02.904 SO libspdk_scheduler_gscheduler.so.4.0 00:05:02.904 LIB libspdk_scheduler_dpdk_governor.a 00:05:02.905 LIB libspdk_accel_ioat.a 00:05:02.905 LIB libspdk_scheduler_dynamic.a 00:05:02.905 LIB libspdk_accel_error.a 00:05:02.905 SO libspdk_keyring_file.so.2.0 00:05:02.905 SO libspdk_keyring_linux.so.1.0 00:05:02.905 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:02.905 SO libspdk_accel_ioat.so.6.0 00:05:02.905 LIB libspdk_accel_iaa.a 00:05:02.905 SO libspdk_scheduler_dynamic.so.4.0 00:05:02.905 SO libspdk_accel_error.so.2.0 00:05:02.905 SYMLINK libspdk_scheduler_gscheduler.so 00:05:02.905 LIB libspdk_accel_dsa.a 00:05:02.905 LIB libspdk_blob_bdev.a 00:05:03.165 SO libspdk_accel_iaa.so.3.0 00:05:03.165 SYMLINK libspdk_keyring_file.so 00:05:03.165 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:03.165 SYMLINK libspdk_keyring_linux.so 00:05:03.165 SYMLINK libspdk_accel_ioat.so 00:05:03.165 SYMLINK libspdk_scheduler_dynamic.so 00:05:03.165 SO libspdk_blob_bdev.so.11.0 00:05:03.165 SO libspdk_accel_dsa.so.5.0 00:05:03.165 SYMLINK libspdk_accel_error.so 00:05:03.165 SYMLINK libspdk_accel_iaa.so 00:05:03.165 SYMLINK libspdk_blob_bdev.so 00:05:03.165 LIB libspdk_vfu_device.a 00:05:03.165 SYMLINK libspdk_accel_dsa.so 00:05:03.165 SO libspdk_vfu_device.so.3.0 00:05:03.165 SYMLINK libspdk_vfu_device.so 00:05:03.426 LIB libspdk_fsdev_aio.a 00:05:03.426 SO libspdk_fsdev_aio.so.1.0 00:05:03.426 LIB libspdk_sock_posix.a 00:05:03.426 SYMLINK libspdk_fsdev_aio.so 00:05:03.426 SO libspdk_sock_posix.so.6.0 00:05:03.426 SYMLINK libspdk_sock_posix.so 00:05:03.688 CC module/bdev/error/vbdev_error.o 00:05:03.688 CC module/blobfs/bdev/blobfs_bdev.o 00:05:03.688 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:03.688 CC module/bdev/error/vbdev_error_rpc.o 00:05:03.688 CC module/bdev/null/bdev_null.o 00:05:03.688 CC module/bdev/null/bdev_null_rpc.o 00:05:03.688 CC module/bdev/lvol/vbdev_lvol.o 00:05:03.688 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:03.688 CC module/bdev/gpt/gpt.o 00:05:03.688 CC module/bdev/aio/bdev_aio.o 00:05:03.688 CC module/bdev/iscsi/bdev_iscsi.o 00:05:03.688 CC module/bdev/aio/bdev_aio_rpc.o 00:05:03.688 CC module/bdev/gpt/vbdev_gpt.o 00:05:03.688 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:03.688 CC module/bdev/delay/vbdev_delay.o 00:05:03.688 CC module/bdev/split/vbdev_split.o 00:05:03.688 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:03.688 CC module/bdev/split/vbdev_split_rpc.o 00:05:03.688 CC module/bdev/malloc/bdev_malloc.o 00:05:03.688 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:03.688 CC module/bdev/nvme/bdev_nvme.o 00:05:03.688 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:03.688 CC module/bdev/nvme/nvme_rpc.o 00:05:03.688 CC module/bdev/ftl/bdev_ftl.o 00:05:03.688 CC module/bdev/nvme/bdev_mdns_client.o 00:05:03.688 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:05:03.688 CC module/bdev/nvme/vbdev_opal.o 00:05:03.688 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:03.688 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:03.688 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:03.688 CC module/bdev/raid/bdev_raid.o 00:05:03.688 CC module/bdev/passthru/vbdev_passthru.o 00:05:03.688 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:03.688 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:03.688 CC module/bdev/raid/bdev_raid_rpc.o 00:05:03.688 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:03.688 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:03.688 CC module/bdev/raid/bdev_raid_sb.o 00:05:03.688 CC module/bdev/raid/raid0.o 00:05:03.688 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:03.688 CC module/bdev/raid/raid1.o 00:05:03.688 CC module/bdev/raid/concat.o 00:05:03.949 LIB libspdk_blobfs_bdev.a 00:05:03.949 SO libspdk_blobfs_bdev.so.6.0 00:05:03.949 LIB libspdk_bdev_error.a 00:05:04.211 LIB libspdk_bdev_gpt.a 00:05:04.211 LIB libspdk_bdev_null.a 00:05:04.211 LIB libspdk_bdev_split.a 00:05:04.211 SO libspdk_bdev_error.so.6.0 00:05:04.211 SO libspdk_bdev_null.so.6.0 00:05:04.211 LIB libspdk_bdev_ftl.a 00:05:04.211 SYMLINK libspdk_blobfs_bdev.so 00:05:04.211 SO libspdk_bdev_gpt.so.6.0 00:05:04.211 SO libspdk_bdev_split.so.6.0 00:05:04.211 LIB libspdk_bdev_passthru.a 00:05:04.211 SO libspdk_bdev_ftl.so.6.0 00:05:04.211 LIB libspdk_bdev_delay.a 00:05:04.211 SYMLINK libspdk_bdev_error.so 00:05:04.211 SYMLINK libspdk_bdev_null.so 00:05:04.211 SYMLINK libspdk_bdev_gpt.so 00:05:04.211 LIB libspdk_bdev_iscsi.a 00:05:04.211 LIB libspdk_bdev_malloc.a 00:05:04.211 LIB libspdk_bdev_aio.a 00:05:04.211 LIB libspdk_bdev_zone_block.a 00:05:04.211 SYMLINK libspdk_bdev_split.so 00:05:04.211 SO libspdk_bdev_passthru.so.6.0 00:05:04.211 SO libspdk_bdev_zone_block.so.6.0 00:05:04.211 SO libspdk_bdev_delay.so.6.0 00:05:04.211 SO libspdk_bdev_malloc.so.6.0 00:05:04.211 SO libspdk_bdev_iscsi.so.6.0 00:05:04.211 SO libspdk_bdev_aio.so.6.0 00:05:04.211 SYMLINK libspdk_bdev_ftl.so 00:05:04.211 SYMLINK libspdk_bdev_zone_block.so 00:05:04.211 SYMLINK libspdk_bdev_passthru.so 00:05:04.211 SYMLINK libspdk_bdev_aio.so 00:05:04.211 SYMLINK libspdk_bdev_delay.so 00:05:04.211 SYMLINK libspdk_bdev_iscsi.so 00:05:04.211 SYMLINK libspdk_bdev_malloc.so 00:05:04.211 LIB libspdk_bdev_lvol.a 00:05:04.473 LIB libspdk_bdev_virtio.a 00:05:04.473 SO libspdk_bdev_lvol.so.6.0 00:05:04.473 SO libspdk_bdev_virtio.so.6.0 00:05:04.473 SYMLINK libspdk_bdev_lvol.so 00:05:04.473 SYMLINK libspdk_bdev_virtio.so 00:05:04.734 LIB libspdk_bdev_raid.a 00:05:04.734 SO libspdk_bdev_raid.so.6.0 00:05:04.996 SYMLINK libspdk_bdev_raid.so 00:05:05.940 LIB libspdk_bdev_nvme.a 00:05:05.940 SO libspdk_bdev_nvme.so.7.0 00:05:05.940 SYMLINK libspdk_bdev_nvme.so 00:05:06.883 CC module/event/subsystems/iobuf/iobuf.o 00:05:06.883 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:06.883 CC module/event/subsystems/vmd/vmd.o 00:05:06.883 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:06.883 CC module/event/subsystems/sock/sock.o 00:05:06.883 CC module/event/subsystems/keyring/keyring.o 00:05:06.883 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:06.883 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:06.883 CC module/event/subsystems/scheduler/scheduler.o 00:05:06.883 CC module/event/subsystems/fsdev/fsdev.o 00:05:06.883 LIB libspdk_event_keyring.a 00:05:06.883 LIB libspdk_event_vmd.a 00:05:06.883 LIB libspdk_event_sock.a 00:05:06.883 LIB libspdk_event_vhost_blk.a 00:05:06.883 LIB 
libspdk_event_iobuf.a 00:05:06.883 LIB libspdk_event_fsdev.a 00:05:06.883 LIB libspdk_event_scheduler.a 00:05:06.883 LIB libspdk_event_vfu_tgt.a 00:05:06.883 SO libspdk_event_keyring.so.1.0 00:05:06.883 SO libspdk_event_sock.so.5.0 00:05:06.883 SO libspdk_event_vmd.so.6.0 00:05:06.883 SO libspdk_event_vhost_blk.so.3.0 00:05:06.883 SO libspdk_event_scheduler.so.4.0 00:05:06.883 SO libspdk_event_fsdev.so.1.0 00:05:06.883 SO libspdk_event_vfu_tgt.so.3.0 00:05:06.883 SO libspdk_event_iobuf.so.3.0 00:05:07.144 SYMLINK libspdk_event_keyring.so 00:05:07.144 SYMLINK libspdk_event_sock.so 00:05:07.144 SYMLINK libspdk_event_vmd.so 00:05:07.144 SYMLINK libspdk_event_vhost_blk.so 00:05:07.144 SYMLINK libspdk_event_scheduler.so 00:05:07.144 SYMLINK libspdk_event_fsdev.so 00:05:07.144 SYMLINK libspdk_event_iobuf.so 00:05:07.144 SYMLINK libspdk_event_vfu_tgt.so 00:05:07.405 CC module/event/subsystems/accel/accel.o 00:05:07.667 LIB libspdk_event_accel.a 00:05:07.667 SO libspdk_event_accel.so.6.0 00:05:07.667 SYMLINK libspdk_event_accel.so 00:05:07.928 CC module/event/subsystems/bdev/bdev.o 00:05:08.190 LIB libspdk_event_bdev.a 00:05:08.190 SO libspdk_event_bdev.so.6.0 00:05:08.451 SYMLINK libspdk_event_bdev.so 00:05:08.714 CC module/event/subsystems/nbd/nbd.o 00:05:08.714 CC module/event/subsystems/ublk/ublk.o 00:05:08.714 CC module/event/subsystems/scsi/scsi.o 00:05:08.714 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:08.714 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:08.975 LIB libspdk_event_nbd.a 00:05:08.975 LIB libspdk_event_ublk.a 00:05:08.975 LIB libspdk_event_scsi.a 00:05:08.975 SO libspdk_event_nbd.so.6.0 00:05:08.975 SO libspdk_event_ublk.so.3.0 00:05:08.975 SO libspdk_event_scsi.so.6.0 00:05:08.975 LIB libspdk_event_nvmf.a 00:05:08.975 SYMLINK libspdk_event_nbd.so 00:05:08.975 SYMLINK libspdk_event_ublk.so 00:05:08.975 SYMLINK libspdk_event_scsi.so 00:05:08.975 SO libspdk_event_nvmf.so.6.0 00:05:08.975 SYMLINK libspdk_event_nvmf.so 00:05:09.236 CC module/event/subsystems/iscsi/iscsi.o 00:05:09.236 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:09.496 LIB libspdk_event_vhost_scsi.a 00:05:09.496 LIB libspdk_event_iscsi.a 00:05:09.496 SO libspdk_event_vhost_scsi.so.3.0 00:05:09.496 SO libspdk_event_iscsi.so.6.0 00:05:09.496 SYMLINK libspdk_event_vhost_scsi.so 00:05:09.757 SYMLINK libspdk_event_iscsi.so 00:05:09.757 SO libspdk.so.6.0 00:05:09.757 SYMLINK libspdk.so 00:05:10.332 CC app/trace_record/trace_record.o 00:05:10.332 CXX app/trace/trace.o 00:05:10.332 CC test/rpc_client/rpc_client_test.o 00:05:10.332 CC app/spdk_top/spdk_top.o 00:05:10.332 CC app/spdk_nvme_identify/identify.o 00:05:10.332 CC app/spdk_lspci/spdk_lspci.o 00:05:10.332 CC app/spdk_nvme_perf/perf.o 00:05:10.332 TEST_HEADER include/spdk/accel.h 00:05:10.332 CC app/spdk_nvme_discover/discovery_aer.o 00:05:10.332 TEST_HEADER include/spdk/accel_module.h 00:05:10.332 TEST_HEADER include/spdk/assert.h 00:05:10.332 TEST_HEADER include/spdk/barrier.h 00:05:10.332 TEST_HEADER include/spdk/bdev.h 00:05:10.332 TEST_HEADER include/spdk/base64.h 00:05:10.332 TEST_HEADER include/spdk/bdev_module.h 00:05:10.332 TEST_HEADER include/spdk/bdev_zone.h 00:05:10.332 TEST_HEADER include/spdk/bit_array.h 00:05:10.332 TEST_HEADER include/spdk/bit_pool.h 00:05:10.332 TEST_HEADER include/spdk/blob_bdev.h 00:05:10.332 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:10.332 TEST_HEADER include/spdk/blob.h 00:05:10.332 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:10.332 TEST_HEADER include/spdk/blobfs.h 00:05:10.332 TEST_HEADER 
include/spdk/conf.h 00:05:10.332 TEST_HEADER include/spdk/config.h 00:05:10.332 TEST_HEADER include/spdk/cpuset.h 00:05:10.332 CC app/iscsi_tgt/iscsi_tgt.o 00:05:10.332 TEST_HEADER include/spdk/crc16.h 00:05:10.332 TEST_HEADER include/spdk/crc32.h 00:05:10.332 TEST_HEADER include/spdk/crc64.h 00:05:10.332 TEST_HEADER include/spdk/dif.h 00:05:10.332 TEST_HEADER include/spdk/endian.h 00:05:10.332 TEST_HEADER include/spdk/dma.h 00:05:10.332 TEST_HEADER include/spdk/env_dpdk.h 00:05:10.332 TEST_HEADER include/spdk/env.h 00:05:10.332 TEST_HEADER include/spdk/fd_group.h 00:05:10.332 TEST_HEADER include/spdk/event.h 00:05:10.333 TEST_HEADER include/spdk/file.h 00:05:10.333 TEST_HEADER include/spdk/fd.h 00:05:10.333 TEST_HEADER include/spdk/fsdev.h 00:05:10.333 CC app/nvmf_tgt/nvmf_main.o 00:05:10.333 TEST_HEADER include/spdk/fsdev_module.h 00:05:10.333 CC app/spdk_dd/spdk_dd.o 00:05:10.333 TEST_HEADER include/spdk/ftl.h 00:05:10.333 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:10.333 TEST_HEADER include/spdk/hexlify.h 00:05:10.333 TEST_HEADER include/spdk/gpt_spec.h 00:05:10.333 TEST_HEADER include/spdk/histogram_data.h 00:05:10.333 TEST_HEADER include/spdk/idxd.h 00:05:10.333 TEST_HEADER include/spdk/idxd_spec.h 00:05:10.333 TEST_HEADER include/spdk/init.h 00:05:10.333 TEST_HEADER include/spdk/ioat_spec.h 00:05:10.333 TEST_HEADER include/spdk/ioat.h 00:05:10.333 TEST_HEADER include/spdk/iscsi_spec.h 00:05:10.333 TEST_HEADER include/spdk/json.h 00:05:10.333 TEST_HEADER include/spdk/jsonrpc.h 00:05:10.333 TEST_HEADER include/spdk/keyring.h 00:05:10.333 TEST_HEADER include/spdk/likely.h 00:05:10.333 TEST_HEADER include/spdk/keyring_module.h 00:05:10.333 TEST_HEADER include/spdk/log.h 00:05:10.333 TEST_HEADER include/spdk/lvol.h 00:05:10.333 TEST_HEADER include/spdk/md5.h 00:05:10.333 TEST_HEADER include/spdk/memory.h 00:05:10.333 TEST_HEADER include/spdk/mmio.h 00:05:10.333 TEST_HEADER include/spdk/nbd.h 00:05:10.333 CC app/spdk_tgt/spdk_tgt.o 00:05:10.333 TEST_HEADER include/spdk/net.h 00:05:10.333 TEST_HEADER include/spdk/notify.h 00:05:10.333 TEST_HEADER include/spdk/nvme.h 00:05:10.333 TEST_HEADER include/spdk/nvme_intel.h 00:05:10.333 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:10.333 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:10.333 TEST_HEADER include/spdk/nvme_spec.h 00:05:10.333 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:10.333 TEST_HEADER include/spdk/nvme_zns.h 00:05:10.333 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:10.333 TEST_HEADER include/spdk/nvmf.h 00:05:10.333 TEST_HEADER include/spdk/nvmf_transport.h 00:05:10.333 TEST_HEADER include/spdk/nvmf_spec.h 00:05:10.333 TEST_HEADER include/spdk/opal.h 00:05:10.333 TEST_HEADER include/spdk/pci_ids.h 00:05:10.333 TEST_HEADER include/spdk/opal_spec.h 00:05:10.333 TEST_HEADER include/spdk/pipe.h 00:05:10.333 TEST_HEADER include/spdk/queue.h 00:05:10.333 TEST_HEADER include/spdk/reduce.h 00:05:10.333 TEST_HEADER include/spdk/scheduler.h 00:05:10.333 TEST_HEADER include/spdk/rpc.h 00:05:10.333 TEST_HEADER include/spdk/scsi_spec.h 00:05:10.333 TEST_HEADER include/spdk/scsi.h 00:05:10.333 TEST_HEADER include/spdk/sock.h 00:05:10.333 TEST_HEADER include/spdk/stdinc.h 00:05:10.333 TEST_HEADER include/spdk/string.h 00:05:10.333 TEST_HEADER include/spdk/thread.h 00:05:10.333 TEST_HEADER include/spdk/trace.h 00:05:10.333 TEST_HEADER include/spdk/trace_parser.h 00:05:10.333 TEST_HEADER include/spdk/tree.h 00:05:10.333 TEST_HEADER include/spdk/util.h 00:05:10.333 TEST_HEADER include/spdk/ublk.h 00:05:10.333 TEST_HEADER include/spdk/uuid.h 
00:05:10.333 TEST_HEADER include/spdk/version.h 00:05:10.333 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:10.333 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:10.333 TEST_HEADER include/spdk/vhost.h 00:05:10.333 TEST_HEADER include/spdk/vmd.h 00:05:10.333 TEST_HEADER include/spdk/xor.h 00:05:10.333 TEST_HEADER include/spdk/zipf.h 00:05:10.333 CXX test/cpp_headers/accel.o 00:05:10.333 CXX test/cpp_headers/accel_module.o 00:05:10.333 CXX test/cpp_headers/assert.o 00:05:10.333 CXX test/cpp_headers/barrier.o 00:05:10.333 CXX test/cpp_headers/base64.o 00:05:10.333 CXX test/cpp_headers/bdev.o 00:05:10.333 CXX test/cpp_headers/bdev_module.o 00:05:10.333 CXX test/cpp_headers/bdev_zone.o 00:05:10.333 CXX test/cpp_headers/bit_pool.o 00:05:10.333 CXX test/cpp_headers/bit_array.o 00:05:10.333 CXX test/cpp_headers/blobfs_bdev.o 00:05:10.333 CXX test/cpp_headers/blob_bdev.o 00:05:10.333 CXX test/cpp_headers/blobfs.o 00:05:10.333 CXX test/cpp_headers/conf.o 00:05:10.333 CXX test/cpp_headers/blob.o 00:05:10.333 CXX test/cpp_headers/cpuset.o 00:05:10.333 CXX test/cpp_headers/config.o 00:05:10.333 CXX test/cpp_headers/crc16.o 00:05:10.333 CXX test/cpp_headers/crc32.o 00:05:10.333 CXX test/cpp_headers/crc64.o 00:05:10.333 CXX test/cpp_headers/dif.o 00:05:10.333 CXX test/cpp_headers/dma.o 00:05:10.333 CXX test/cpp_headers/env.o 00:05:10.333 CXX test/cpp_headers/endian.o 00:05:10.333 CXX test/cpp_headers/event.o 00:05:10.333 CXX test/cpp_headers/env_dpdk.o 00:05:10.333 CXX test/cpp_headers/fd_group.o 00:05:10.333 CXX test/cpp_headers/fd.o 00:05:10.333 CXX test/cpp_headers/file.o 00:05:10.333 CXX test/cpp_headers/fsdev.o 00:05:10.333 CXX test/cpp_headers/fsdev_module.o 00:05:10.333 CXX test/cpp_headers/ftl.o 00:05:10.333 CXX test/cpp_headers/fuse_dispatcher.o 00:05:10.333 CXX test/cpp_headers/gpt_spec.o 00:05:10.333 CXX test/cpp_headers/hexlify.o 00:05:10.333 CXX test/cpp_headers/histogram_data.o 00:05:10.333 CXX test/cpp_headers/idxd_spec.o 00:05:10.333 CXX test/cpp_headers/init.o 00:05:10.333 CXX test/cpp_headers/idxd.o 00:05:10.333 CXX test/cpp_headers/ioat.o 00:05:10.333 CXX test/cpp_headers/ioat_spec.o 00:05:10.333 CXX test/cpp_headers/json.o 00:05:10.333 CXX test/cpp_headers/iscsi_spec.o 00:05:10.600 CXX test/cpp_headers/jsonrpc.o 00:05:10.600 CXX test/cpp_headers/keyring.o 00:05:10.600 CXX test/cpp_headers/keyring_module.o 00:05:10.600 CXX test/cpp_headers/likely.o 00:05:10.600 CXX test/cpp_headers/log.o 00:05:10.600 CC test/app/jsoncat/jsoncat.o 00:05:10.600 CC examples/util/zipf/zipf.o 00:05:10.600 CXX test/cpp_headers/md5.o 00:05:10.600 CXX test/cpp_headers/lvol.o 00:05:10.600 CXX test/cpp_headers/mmio.o 00:05:10.600 CXX test/cpp_headers/memory.o 00:05:10.600 CXX test/cpp_headers/net.o 00:05:10.600 CXX test/cpp_headers/nbd.o 00:05:10.600 CC examples/ioat/perf/perf.o 00:05:10.600 CXX test/cpp_headers/notify.o 00:05:10.600 CXX test/cpp_headers/nvme.o 00:05:10.600 CXX test/cpp_headers/nvme_ocssd.o 00:05:10.600 CXX test/cpp_headers/nvme_intel.o 00:05:10.600 CXX test/cpp_headers/nvme_spec.o 00:05:10.600 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:10.600 CXX test/cpp_headers/nvme_zns.o 00:05:10.600 CXX test/cpp_headers/nvmf_cmd.o 00:05:10.600 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:10.600 CXX test/cpp_headers/nvmf.o 00:05:10.600 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:10.600 CXX test/cpp_headers/nvmf_spec.o 00:05:10.600 CXX test/cpp_headers/opal.o 00:05:10.600 CXX test/cpp_headers/nvmf_transport.o 00:05:10.600 CXX test/cpp_headers/opal_spec.o 00:05:10.600 CC 
test/thread/poller_perf/poller_perf.o 00:05:10.600 CXX test/cpp_headers/pci_ids.o 00:05:10.600 CC test/env/pci/pci_ut.o 00:05:10.600 CC examples/ioat/verify/verify.o 00:05:10.600 CXX test/cpp_headers/queue.o 00:05:10.600 CXX test/cpp_headers/pipe.o 00:05:10.600 LINK spdk_lspci 00:05:10.600 CXX test/cpp_headers/rpc.o 00:05:10.600 CXX test/cpp_headers/reduce.o 00:05:10.600 CXX test/cpp_headers/scsi.o 00:05:10.600 CXX test/cpp_headers/scheduler.o 00:05:10.600 CC test/app/histogram_perf/histogram_perf.o 00:05:10.600 CC test/env/memory/memory_ut.o 00:05:10.600 CXX test/cpp_headers/scsi_spec.o 00:05:10.600 CXX test/cpp_headers/stdinc.o 00:05:10.600 CXX test/cpp_headers/sock.o 00:05:10.600 CXX test/cpp_headers/string.o 00:05:10.600 CXX test/cpp_headers/thread.o 00:05:10.600 CC test/app/stub/stub.o 00:05:10.600 CC test/env/vtophys/vtophys.o 00:05:10.600 CXX test/cpp_headers/trace.o 00:05:10.600 CC app/fio/nvme/fio_plugin.o 00:05:10.600 CXX test/cpp_headers/trace_parser.o 00:05:10.600 CXX test/cpp_headers/ublk.o 00:05:10.600 CXX test/cpp_headers/tree.o 00:05:10.600 CC test/dma/test_dma/test_dma.o 00:05:10.600 CXX test/cpp_headers/util.o 00:05:10.600 CXX test/cpp_headers/vfio_user_pci.o 00:05:10.600 CXX test/cpp_headers/uuid.o 00:05:10.600 CXX test/cpp_headers/version.o 00:05:10.600 CXX test/cpp_headers/vfio_user_spec.o 00:05:10.600 CXX test/cpp_headers/vhost.o 00:05:10.600 CXX test/cpp_headers/vmd.o 00:05:10.600 CXX test/cpp_headers/xor.o 00:05:10.600 CXX test/cpp_headers/zipf.o 00:05:10.600 CC test/app/bdev_svc/bdev_svc.o 00:05:10.600 CC app/fio/bdev/fio_plugin.o 00:05:10.600 LINK rpc_client_test 00:05:10.871 LINK interrupt_tgt 00:05:10.871 LINK iscsi_tgt 00:05:10.871 LINK nvmf_tgt 00:05:10.871 LINK spdk_nvme_discover 00:05:11.136 LINK spdk_trace_record 00:05:11.136 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:11.136 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:11.136 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:11.136 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:11.136 CC test/env/mem_callbacks/mem_callbacks.o 00:05:11.395 LINK spdk_trace 00:05:11.396 LINK spdk_tgt 00:05:11.396 LINK zipf 00:05:11.396 LINK jsoncat 00:05:11.396 LINK poller_perf 00:05:11.396 LINK spdk_dd 00:05:11.396 LINK env_dpdk_post_init 00:05:11.656 LINK ioat_perf 00:05:11.656 LINK bdev_svc 00:05:11.656 LINK histogram_perf 00:05:11.656 LINK stub 00:05:11.656 LINK pci_ut 00:05:11.656 LINK verify 00:05:11.656 LINK vtophys 00:05:11.656 LINK mem_callbacks 00:05:11.916 LINK spdk_nvme_perf 00:05:11.916 CC app/vhost/vhost.o 00:05:11.916 LINK spdk_top 00:05:11.916 LINK vhost_fuzz 00:05:11.916 LINK nvme_fuzz 00:05:11.916 CC examples/vmd/lsvmd/lsvmd.o 00:05:11.916 CC examples/idxd/perf/perf.o 00:05:11.916 CC examples/sock/hello_world/hello_sock.o 00:05:11.916 CC examples/vmd/led/led.o 00:05:11.916 LINK spdk_nvme 00:05:11.916 LINK test_dma 00:05:12.176 CC examples/thread/thread/thread_ex.o 00:05:12.177 LINK vhost 00:05:12.177 LINK spdk_bdev 00:05:12.177 CC test/event/event_perf/event_perf.o 00:05:12.177 CC test/event/reactor_perf/reactor_perf.o 00:05:12.177 CC test/event/reactor/reactor.o 00:05:12.177 LINK spdk_nvme_identify 00:05:12.177 CC test/event/app_repeat/app_repeat.o 00:05:12.177 CC test/event/scheduler/scheduler.o 00:05:12.177 LINK lsvmd 00:05:12.177 LINK led 00:05:12.177 LINK hello_sock 00:05:12.177 LINK memory_ut 00:05:12.177 LINK reactor_perf 00:05:12.441 LINK event_perf 00:05:12.441 LINK reactor 00:05:12.441 LINK idxd_perf 00:05:12.441 LINK thread 00:05:12.441 LINK app_repeat 00:05:12.441 LINK scheduler 00:05:12.701 
CC test/nvme/sgl/sgl.o 00:05:12.701 CC test/nvme/aer/aer.o 00:05:12.701 CC test/nvme/startup/startup.o 00:05:12.701 CC test/nvme/fdp/fdp.o 00:05:12.701 CC test/nvme/reserve/reserve.o 00:05:12.701 CC test/nvme/e2edp/nvme_dp.o 00:05:12.701 CC test/nvme/reset/reset.o 00:05:12.701 CC test/nvme/simple_copy/simple_copy.o 00:05:12.701 CC test/nvme/overhead/overhead.o 00:05:12.701 CC test/nvme/boot_partition/boot_partition.o 00:05:12.701 CC test/nvme/connect_stress/connect_stress.o 00:05:12.701 CC test/nvme/compliance/nvme_compliance.o 00:05:12.701 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:12.701 CC test/nvme/cuse/cuse.o 00:05:12.701 CC test/nvme/err_injection/err_injection.o 00:05:12.701 CC test/nvme/fused_ordering/fused_ordering.o 00:05:12.701 CC test/blobfs/mkfs/mkfs.o 00:05:12.701 CC test/accel/dif/dif.o 00:05:12.962 CC test/lvol/esnap/esnap.o 00:05:12.962 CC examples/nvme/reconnect/reconnect.o 00:05:12.962 CC examples/nvme/arbitration/arbitration.o 00:05:12.962 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:12.962 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:12.962 CC examples/nvme/hotplug/hotplug.o 00:05:12.962 CC examples/nvme/hello_world/hello_world.o 00:05:12.962 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:12.962 CC examples/nvme/abort/abort.o 00:05:12.962 LINK boot_partition 00:05:12.962 LINK startup 00:05:12.962 LINK err_injection 00:05:12.962 LINK connect_stress 00:05:12.962 LINK reserve 00:05:12.962 LINK doorbell_aers 00:05:12.962 LINK fused_ordering 00:05:12.962 LINK mkfs 00:05:12.962 LINK simple_copy 00:05:12.962 LINK reset 00:05:12.962 LINK sgl 00:05:12.962 CC examples/accel/perf/accel_perf.o 00:05:12.962 LINK aer 00:05:12.962 LINK nvme_dp 00:05:12.962 CC examples/blob/cli/blobcli.o 00:05:12.962 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:12.962 CC examples/blob/hello_world/hello_blob.o 00:05:12.962 LINK overhead 00:05:12.962 LINK iscsi_fuzz 00:05:12.962 LINK nvme_compliance 00:05:12.962 LINK fdp 00:05:13.224 LINK cmb_copy 00:05:13.224 LINK pmr_persistence 00:05:13.224 LINK hello_world 00:05:13.224 LINK hotplug 00:05:13.224 LINK reconnect 00:05:13.224 LINK arbitration 00:05:13.224 LINK abort 00:05:13.224 LINK hello_blob 00:05:13.484 LINK hello_fsdev 00:05:13.484 LINK dif 00:05:13.484 LINK nvme_manage 00:05:13.484 LINK accel_perf 00:05:13.484 LINK blobcli 00:05:14.063 LINK cuse 00:05:14.063 CC test/bdev/bdevio/bdevio.o 00:05:14.063 CC examples/bdev/hello_world/hello_bdev.o 00:05:14.063 CC examples/bdev/bdevperf/bdevperf.o 00:05:14.324 LINK hello_bdev 00:05:14.324 LINK bdevio 00:05:14.894 LINK bdevperf 00:05:15.465 CC examples/nvmf/nvmf/nvmf.o 00:05:15.727 LINK nvmf 00:05:17.643 LINK esnap 00:05:17.643 00:05:17.643 real 0m54.555s 00:05:17.643 user 6m35.485s 00:05:17.643 sys 4m16.193s 00:05:17.643 17:32:17 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:17.643 17:32:17 make -- common/autotest_common.sh@10 -- $ set +x 00:05:17.643 ************************************ 00:05:17.643 END TEST make 00:05:17.643 ************************************ 00:05:17.643 17:32:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:17.643 17:32:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:17.643 17:32:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:17.643 17:32:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.643 17:32:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:17.643 17:32:17 -- pm/common@44 -- $ pid=2315777 
00:05:17.643 17:32:17 -- pm/common@50 -- $ kill -TERM 2315777 00:05:17.643 17:32:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.643 17:32:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:17.643 17:32:17 -- pm/common@44 -- $ pid=2315778 00:05:17.643 17:32:17 -- pm/common@50 -- $ kill -TERM 2315778 00:05:17.643 17:32:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.643 17:32:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:17.643 17:32:17 -- pm/common@44 -- $ pid=2315780 00:05:17.643 17:32:17 -- pm/common@50 -- $ kill -TERM 2315780 00:05:17.643 17:32:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.643 17:32:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:17.643 17:32:17 -- pm/common@44 -- $ pid=2315805 00:05:17.643 17:32:17 -- pm/common@50 -- $ sudo -E kill -TERM 2315805 00:05:17.905 17:32:17 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:17.905 17:32:17 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:17.905 17:32:17 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:17.905 17:32:17 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:17.905 17:32:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.905 17:32:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.905 17:32:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.905 17:32:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.905 17:32:17 -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.905 17:32:17 -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.905 17:32:17 -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.905 17:32:17 -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.905 17:32:17 -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.905 17:32:17 -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.905 17:32:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.905 17:32:17 -- scripts/common.sh@344 -- # case "$op" in 00:05:17.905 17:32:17 -- scripts/common.sh@345 -- # : 1 00:05:17.905 17:32:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.905 17:32:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.905 17:32:17 -- scripts/common.sh@365 -- # decimal 1 00:05:17.905 17:32:17 -- scripts/common.sh@353 -- # local d=1 00:05:17.905 17:32:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.905 17:32:17 -- scripts/common.sh@355 -- # echo 1 00:05:17.905 17:32:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.905 17:32:17 -- scripts/common.sh@366 -- # decimal 2 00:05:17.905 17:32:17 -- scripts/common.sh@353 -- # local d=2 00:05:17.905 17:32:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.905 17:32:17 -- scripts/common.sh@355 -- # echo 2 00:05:17.905 17:32:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.905 17:32:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.905 17:32:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.905 17:32:17 -- scripts/common.sh@368 -- # return 0 00:05:17.905 17:32:17 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.905 17:32:17 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:17.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.905 --rc genhtml_branch_coverage=1 00:05:17.905 --rc genhtml_function_coverage=1 00:05:17.905 --rc genhtml_legend=1 00:05:17.905 --rc geninfo_all_blocks=1 00:05:17.905 --rc geninfo_unexecuted_blocks=1 00:05:17.905 00:05:17.905 ' 00:05:17.905 17:32:17 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:17.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.905 --rc genhtml_branch_coverage=1 00:05:17.905 --rc genhtml_function_coverage=1 00:05:17.905 --rc genhtml_legend=1 00:05:17.905 --rc geninfo_all_blocks=1 00:05:17.905 --rc geninfo_unexecuted_blocks=1 00:05:17.905 00:05:17.905 ' 00:05:17.905 17:32:17 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:17.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.905 --rc genhtml_branch_coverage=1 00:05:17.905 --rc genhtml_function_coverage=1 00:05:17.905 --rc genhtml_legend=1 00:05:17.905 --rc geninfo_all_blocks=1 00:05:17.905 --rc geninfo_unexecuted_blocks=1 00:05:17.905 00:05:17.905 ' 00:05:17.905 17:32:17 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:17.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.905 --rc genhtml_branch_coverage=1 00:05:17.905 --rc genhtml_function_coverage=1 00:05:17.905 --rc genhtml_legend=1 00:05:17.905 --rc geninfo_all_blocks=1 00:05:17.905 --rc geninfo_unexecuted_blocks=1 00:05:17.905 00:05:17.905 ' 00:05:17.905 17:32:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.905 17:32:17 -- nvmf/common.sh@7 -- # uname -s 00:05:17.905 17:32:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.905 17:32:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.905 17:32:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.905 17:32:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.905 17:32:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.905 17:32:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.905 17:32:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.905 17:32:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.905 17:32:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.905 17:32:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.905 17:32:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.905 17:32:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.905 17:32:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.905 17:32:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.906 17:32:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.906 17:32:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.906 17:32:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.906 17:32:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.906 17:32:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.906 17:32:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.906 17:32:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.906 17:32:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.906 17:32:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.906 17:32:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.906 17:32:17 -- paths/export.sh@5 -- # export PATH 00:05:17.906 17:32:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.906 17:32:17 -- nvmf/common.sh@51 -- # : 0 00:05:17.906 17:32:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.906 17:32:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.906 17:32:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.906 17:32:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.906 17:32:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.906 17:32:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.906 17:32:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.906 17:32:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.906 17:32:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.906 17:32:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:17.906 17:32:17 -- spdk/autotest.sh@32 -- # uname -s 00:05:17.906 17:32:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:17.906 17:32:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:17.906 17:32:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:05:17.906 17:32:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:17.906 17:32:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:17.906 17:32:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:17.906 17:32:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:17.906 17:32:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:17.906 17:32:17 -- spdk/autotest.sh@48 -- # udevadm_pid=2396937 00:05:17.906 17:32:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:17.906 17:32:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:17.906 17:32:17 -- pm/common@17 -- # local monitor 00:05:17.906 17:32:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.906 17:32:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.906 17:32:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.906 17:32:17 -- pm/common@21 -- # date +%s 00:05:17.906 17:32:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.906 17:32:17 -- pm/common@21 -- # date +%s 00:05:17.906 17:32:17 -- pm/common@25 -- # sleep 1 00:05:17.906 17:32:17 -- pm/common@21 -- # date +%s 00:05:17.906 17:32:17 -- pm/common@21 -- # date +%s 00:05:17.906 17:32:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732120337 00:05:17.906 17:32:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732120337 00:05:17.906 17:32:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732120337 00:05:17.906 17:32:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732120337 00:05:17.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732120337_collect-cpu-load.pm.log 00:05:17.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732120337_collect-vmstat.pm.log 00:05:17.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732120337_collect-cpu-temp.pm.log 00:05:17.906 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732120337_collect-bmc-pm.bmc.pm.log 00:05:18.850 17:32:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:18.850 17:32:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:18.850 17:32:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.850 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:05:18.850 17:32:18 -- spdk/autotest.sh@59 -- # create_test_list 00:05:18.850 17:32:18 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:18.850 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:05:19.110 17:32:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:19.110 17:32:18 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.110 17:32:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.110 17:32:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:19.110 17:32:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.110 17:32:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:19.110 17:32:18 -- common/autotest_common.sh@1455 -- # uname 00:05:19.110 17:32:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:19.110 17:32:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:19.110 17:32:18 -- common/autotest_common.sh@1475 -- # uname 00:05:19.110 17:32:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:19.110 17:32:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:19.110 17:32:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:19.110 lcov: LCOV version 1.15 00:05:19.110 17:32:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:41.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:41.088 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:49.267 17:32:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:49.267 17:32:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.267 17:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:49.267 17:32:48 -- spdk/autotest.sh@78 -- # rm -f 00:05:49.267 17:32:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:52.571 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:05:52.571 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:05:52.571 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:05:52.571 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:65:00.0 (144d a80a): Already using the nvme driver 00:05:52.832 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:05:52.832 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:05:53.093 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:05:53.093 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:05:53.353 17:32:53 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:53.353 17:32:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:53.353 17:32:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:53.353 17:32:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:53.354 17:32:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:53.354 17:32:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:53.354 17:32:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:53.354 17:32:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:53.354 17:32:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:53.354 17:32:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:53.354 17:32:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:53.354 17:32:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:53.354 17:32:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:53.354 17:32:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:53.354 17:32:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:53.354 No valid GPT data, bailing 00:05:53.354 17:32:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:53.354 17:32:53 -- scripts/common.sh@394 -- # pt= 00:05:53.354 17:32:53 -- scripts/common.sh@395 -- # return 1 00:05:53.354 17:32:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:53.354 1+0 records in 00:05:53.354 1+0 records out 00:05:53.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00196432 s, 534 MB/s 00:05:53.354 17:32:53 -- spdk/autotest.sh@105 -- # sync 00:05:53.354 17:32:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:53.354 17:32:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:53.354 17:32:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:03.363 17:33:01 -- spdk/autotest.sh@111 -- # uname -s 00:06:03.363 17:33:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:03.363 17:33:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:03.363 17:33:01 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:05.279 Hugepages 00:06:05.279 node hugesize free / total 00:06:05.279 node0 1048576kB 0 / 0 00:06:05.279 node0 2048kB 0 / 0 00:06:05.279 node1 1048576kB 0 / 0 00:06:05.279 node1 2048kB 0 / 0 00:06:05.279 00:06:05.279 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:05.279 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:05.279 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:05.279 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:05.279 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:05.279 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:05.279 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:05.279 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:05.279 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:05.279 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:05.279 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:05.279 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:05.279 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:05.279 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:05.279 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:05.279 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:05.279 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:05.279 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:06:05.279 17:33:05 -- spdk/autotest.sh@117 -- # uname -s 00:06:05.279 17:33:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:05.279 17:33:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:05.279 17:33:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:09.484 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:09.484 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:10.864 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:11.124 17:33:10 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:12.063 17:33:11 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:12.063 17:33:11 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:12.063 17:33:11 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:12.063 17:33:11 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:12.063 17:33:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:12.063 17:33:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:12.063 17:33:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:12.063 17:33:11 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:12.063 17:33:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:12.323 17:33:12 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:12.323 17:33:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:06:12.323 17:33:12 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:15.622 Waiting for block devices as requested 00:06:15.622 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:15.622 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:15.884 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:15.884 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:15.884 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:16.145 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:16.145 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:16.145 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:16.406 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:16.406 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:16.667 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:16.667 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:16.667 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:16.928 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:16.928 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:16.928 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:17.189 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:06:17.450 17:33:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:17.450 17:33:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:17.450 17:33:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:06:17.450 17:33:17 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:06:17.450 17:33:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:17.450 17:33:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:17.450 17:33:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:17.450 17:33:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:17.451 17:33:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:17.451 17:33:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:17.451 17:33:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:17.451 17:33:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:17.451 17:33:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:17.451 17:33:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:06:17.451 17:33:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:17.451 17:33:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:17.451 17:33:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:17.451 17:33:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:17.451 17:33:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:17.451 17:33:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:17.451 17:33:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:17.451 17:33:17 -- common/autotest_common.sh@1541 -- # continue 00:06:17.451 17:33:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:17.451 17:33:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.451 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:06:17.451 17:33:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:17.451 17:33:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.451 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:06:17.451 17:33:17 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:21.657 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:21.657 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:21.657 17:33:21 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:06:21.657 17:33:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.657 17:33:21 -- common/autotest_common.sh@10 -- # set +x 00:06:21.657 17:33:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:21.657 17:33:21 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:21.657 17:33:21 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:21.657 17:33:21 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:21.657 17:33:21 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:21.657 17:33:21 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:21.657 17:33:21 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:21.657 17:33:21 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:21.657 17:33:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:21.657 17:33:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:21.657 17:33:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:21.657 17:33:21 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:21.657 17:33:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:21.657 17:33:21 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:21.657 17:33:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:06:21.657 17:33:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:21.657 17:33:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:21.657 17:33:21 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:06:21.657 17:33:21 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:21.657 17:33:21 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:21.657 17:33:21 -- common/autotest_common.sh@1570 -- # return 0 00:06:21.657 17:33:21 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:21.657 17:33:21 -- common/autotest_common.sh@1578 -- # return 0 00:06:21.657 17:33:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:21.657 17:33:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:21.657 17:33:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:21.657 17:33:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:21.657 17:33:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:21.657 17:33:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.657 17:33:21 -- common/autotest_common.sh@10 -- # set +x 00:06:21.657 17:33:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:21.657 17:33:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:21.657 17:33:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.657 17:33:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.657 17:33:21 -- common/autotest_common.sh@10 -- # set +x 00:06:21.657 ************************************ 00:06:21.657 START TEST env 00:06:21.657 ************************************ 00:06:21.657 17:33:21 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:21.918 * Looking for test storage... 
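
For context on the opal_revert_cleanup trace above: get_nvme_bdfs_by_id walks every NVMe BDF reported by gen_nvme.sh and keeps only controllers whose sysfs PCI device ID reads 0x0a54 (an Intel datacenter NVMe part); the Samsung controller on this rig reports 0xa80a, so the revert loop falls through with an empty list. A minimal shell sketch of that filter, assuming an SPDK checkout in $rootdir and jq installed (the variable names here are illustrative, not the helper's own):

    rootdir=${rootdir:-/path/to/spdk}    # assumed checkout location
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0xa80a above
        [[ $device == 0x0a54 ]] && echo "$bdf matches 0x0a54; Opal revert would run"
    done
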
00:06:21.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.918 17:33:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.918 17:33:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.918 17:33:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.918 17:33:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.918 17:33:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.918 17:33:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.918 17:33:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.918 17:33:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.918 17:33:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.918 17:33:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.918 17:33:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.918 17:33:21 env -- scripts/common.sh@344 -- # case "$op" in 00:06:21.918 17:33:21 env -- scripts/common.sh@345 -- # : 1 00:06:21.918 17:33:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.918 17:33:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.918 17:33:21 env -- scripts/common.sh@365 -- # decimal 1 00:06:21.918 17:33:21 env -- scripts/common.sh@353 -- # local d=1 00:06:21.918 17:33:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.918 17:33:21 env -- scripts/common.sh@355 -- # echo 1 00:06:21.918 17:33:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.918 17:33:21 env -- scripts/common.sh@366 -- # decimal 2 00:06:21.918 17:33:21 env -- scripts/common.sh@353 -- # local d=2 00:06:21.918 17:33:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.918 17:33:21 env -- scripts/common.sh@355 -- # echo 2 00:06:21.918 17:33:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.918 17:33:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.918 17:33:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.918 17:33:21 env -- scripts/common.sh@368 -- # return 0 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.918 --rc genhtml_branch_coverage=1 00:06:21.918 --rc genhtml_function_coverage=1 00:06:21.918 --rc genhtml_legend=1 00:06:21.918 --rc geninfo_all_blocks=1 00:06:21.918 --rc geninfo_unexecuted_blocks=1 00:06:21.918 00:06:21.918 ' 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.918 --rc genhtml_branch_coverage=1 00:06:21.918 --rc genhtml_function_coverage=1 00:06:21.918 --rc genhtml_legend=1 00:06:21.918 --rc geninfo_all_blocks=1 00:06:21.918 --rc geninfo_unexecuted_blocks=1 00:06:21.918 00:06:21.918 ' 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.918 --rc genhtml_branch_coverage=1 00:06:21.918 --rc genhtml_function_coverage=1 
00:06:21.918 --rc genhtml_legend=1 00:06:21.918 --rc geninfo_all_blocks=1 00:06:21.918 --rc geninfo_unexecuted_blocks=1 00:06:21.918 00:06:21.918 ' 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.918 --rc genhtml_branch_coverage=1 00:06:21.918 --rc genhtml_function_coverage=1 00:06:21.918 --rc genhtml_legend=1 00:06:21.918 --rc geninfo_all_blocks=1 00:06:21.918 --rc geninfo_unexecuted_blocks=1 00:06:21.918 00:06:21.918 ' 00:06:21.918 17:33:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.918 17:33:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.918 17:33:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.918 ************************************ 00:06:21.918 START TEST env_memory 00:06:21.918 ************************************ 00:06:21.918 17:33:21 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:21.918 00:06:21.918 00:06:21.918 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.918 http://cunit.sourceforge.net/ 00:06:21.918 00:06:21.918 00:06:21.918 Suite: memory 00:06:22.180 Test: alloc and free memory map ...[2024-11-20 17:33:21.859746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:22.180 passed 00:06:22.180 Test: mem map translation ...[2024-11-20 17:33:21.885309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:22.180 [2024-11-20 17:33:21.885337] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:22.180 [2024-11-20 17:33:21.885390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:22.180 [2024-11-20 17:33:21.885398] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:22.180 passed 00:06:22.180 Test: mem map registration ...[2024-11-20 17:33:21.940550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:22.180 [2024-11-20 17:33:21.940584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:22.180 passed 00:06:22.180 Test: mem map adjacent registrations ...passed 00:06:22.180 00:06:22.180 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.180 suites 1 1 n/a 0 0 00:06:22.180 tests 4 4 4 0 0 00:06:22.180 asserts 152 152 152 0 n/a 00:06:22.180 00:06:22.180 Elapsed time = 0.193 seconds 00:06:22.180 00:06:22.180 real 0m0.208s 00:06:22.180 user 0m0.195s 00:06:22.180 sys 0m0.012s 00:06:22.180 17:33:22 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.180 17:33:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
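
The START TEST / END TEST banners and the real/user/sys triples that frame each unit in this log come from the run_test wrapper in test/common/autotest_common.sh. A stripped-down sketch of that pattern (the real wrapper also manages xtrace state and timing records):

    # Illustrative only; see run_test in test/common/autotest_common.sh
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"               # bash's time keyword prints the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test_sketch env_memory ./test/env/memory/memory_ut
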
00:06:22.180 ************************************ 00:06:22.180 END TEST env_memory 00:06:22.180 ************************************ 00:06:22.180 17:33:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:22.180 17:33:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.180 17:33:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.180 17:33:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.441 ************************************ 00:06:22.441 START TEST env_vtophys 00:06:22.441 ************************************ 00:06:22.441 17:33:22 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:22.441 EAL: lib.eal log level changed from notice to debug 00:06:22.441 EAL: Detected lcore 0 as core 0 on socket 0 00:06:22.441 EAL: Detected lcore 1 as core 1 on socket 0 00:06:22.441 EAL: Detected lcore 2 as core 2 on socket 0 00:06:22.441 EAL: Detected lcore 3 as core 3 on socket 0 00:06:22.441 EAL: Detected lcore 4 as core 4 on socket 0 00:06:22.441 EAL: Detected lcore 5 as core 5 on socket 0 00:06:22.441 EAL: Detected lcore 6 as core 6 on socket 0 00:06:22.441 EAL: Detected lcore 7 as core 7 on socket 0 00:06:22.441 EAL: Detected lcore 8 as core 8 on socket 0 00:06:22.441 EAL: Detected lcore 9 as core 9 on socket 0 00:06:22.441 EAL: Detected lcore 10 as core 10 on socket 0 00:06:22.441 EAL: Detected lcore 11 as core 11 on socket 0 00:06:22.441 EAL: Detected lcore 12 as core 12 on socket 0 00:06:22.441 EAL: Detected lcore 13 as core 13 on socket 0 00:06:22.441 EAL: Detected lcore 14 as core 14 on socket 0 00:06:22.441 EAL: Detected lcore 15 as core 15 on socket 0 00:06:22.441 EAL: Detected lcore 16 as core 16 on socket 0 00:06:22.441 EAL: Detected lcore 17 as core 17 on socket 0 00:06:22.441 EAL: Detected lcore 18 as core 18 on socket 0 00:06:22.441 EAL: Detected lcore 19 as core 19 on socket 0 00:06:22.441 EAL: Detected lcore 20 as core 20 on socket 0 00:06:22.441 EAL: Detected lcore 21 as core 21 on socket 0 00:06:22.441 EAL: Detected lcore 22 as core 22 on socket 0 00:06:22.441 EAL: Detected lcore 23 as core 23 on socket 0 00:06:22.441 EAL: Detected lcore 24 as core 24 on socket 0 00:06:22.441 EAL: Detected lcore 25 as core 25 on socket 0 00:06:22.441 EAL: Detected lcore 26 as core 26 on socket 0 00:06:22.441 EAL: Detected lcore 27 as core 27 on socket 0 00:06:22.441 EAL: Detected lcore 28 as core 28 on socket 0 00:06:22.441 EAL: Detected lcore 29 as core 29 on socket 0 00:06:22.441 EAL: Detected lcore 30 as core 30 on socket 0 00:06:22.441 EAL: Detected lcore 31 as core 31 on socket 0 00:06:22.441 EAL: Detected lcore 32 as core 32 on socket 0 00:06:22.441 EAL: Detected lcore 33 as core 33 on socket 0 00:06:22.441 EAL: Detected lcore 34 as core 34 on socket 0 00:06:22.441 EAL: Detected lcore 35 as core 35 on socket 0 00:06:22.441 EAL: Detected lcore 36 as core 0 on socket 1 00:06:22.441 EAL: Detected lcore 37 as core 1 on socket 1 00:06:22.441 EAL: Detected lcore 38 as core 2 on socket 1 00:06:22.441 EAL: Detected lcore 39 as core 3 on socket 1 00:06:22.441 EAL: Detected lcore 40 as core 4 on socket 1 00:06:22.441 EAL: Detected lcore 41 as core 5 on socket 1 00:06:22.441 EAL: Detected lcore 42 as core 6 on socket 1 00:06:22.441 EAL: Detected lcore 43 as core 7 on socket 1 00:06:22.441 EAL: Detected lcore 44 as core 8 on socket 1 00:06:22.441 EAL: Detected lcore 45 as core 9 on socket 1 
00:06:22.441 EAL: Detected lcore 46 as core 10 on socket 1 00:06:22.441 EAL: Detected lcore 47 as core 11 on socket 1 00:06:22.441 EAL: Detected lcore 48 as core 12 on socket 1 00:06:22.441 EAL: Detected lcore 49 as core 13 on socket 1 00:06:22.441 EAL: Detected lcore 50 as core 14 on socket 1 00:06:22.441 EAL: Detected lcore 51 as core 15 on socket 1 00:06:22.441 EAL: Detected lcore 52 as core 16 on socket 1 00:06:22.441 EAL: Detected lcore 53 as core 17 on socket 1 00:06:22.441 EAL: Detected lcore 54 as core 18 on socket 1 00:06:22.441 EAL: Detected lcore 55 as core 19 on socket 1 00:06:22.441 EAL: Detected lcore 56 as core 20 on socket 1 00:06:22.441 EAL: Detected lcore 57 as core 21 on socket 1 00:06:22.441 EAL: Detected lcore 58 as core 22 on socket 1 00:06:22.441 EAL: Detected lcore 59 as core 23 on socket 1 00:06:22.441 EAL: Detected lcore 60 as core 24 on socket 1 00:06:22.441 EAL: Detected lcore 61 as core 25 on socket 1 00:06:22.441 EAL: Detected lcore 62 as core 26 on socket 1 00:06:22.441 EAL: Detected lcore 63 as core 27 on socket 1 00:06:22.441 EAL: Detected lcore 64 as core 28 on socket 1 00:06:22.441 EAL: Detected lcore 65 as core 29 on socket 1 00:06:22.441 EAL: Detected lcore 66 as core 30 on socket 1 00:06:22.441 EAL: Detected lcore 67 as core 31 on socket 1 00:06:22.441 EAL: Detected lcore 68 as core 32 on socket 1 00:06:22.441 EAL: Detected lcore 69 as core 33 on socket 1 00:06:22.441 EAL: Detected lcore 70 as core 34 on socket 1 00:06:22.441 EAL: Detected lcore 71 as core 35 on socket 1 00:06:22.441 EAL: Detected lcore 72 as core 0 on socket 0 00:06:22.441 EAL: Detected lcore 73 as core 1 on socket 0 00:06:22.441 EAL: Detected lcore 74 as core 2 on socket 0 00:06:22.441 EAL: Detected lcore 75 as core 3 on socket 0 00:06:22.441 EAL: Detected lcore 76 as core 4 on socket 0 00:06:22.441 EAL: Detected lcore 77 as core 5 on socket 0 00:06:22.441 EAL: Detected lcore 78 as core 6 on socket 0 00:06:22.441 EAL: Detected lcore 79 as core 7 on socket 0 00:06:22.441 EAL: Detected lcore 80 as core 8 on socket 0 00:06:22.441 EAL: Detected lcore 81 as core 9 on socket 0 00:06:22.441 EAL: Detected lcore 82 as core 10 on socket 0 00:06:22.441 EAL: Detected lcore 83 as core 11 on socket 0 00:06:22.441 EAL: Detected lcore 84 as core 12 on socket 0 00:06:22.441 EAL: Detected lcore 85 as core 13 on socket 0 00:06:22.441 EAL: Detected lcore 86 as core 14 on socket 0 00:06:22.441 EAL: Detected lcore 87 as core 15 on socket 0 00:06:22.441 EAL: Detected lcore 88 as core 16 on socket 0 00:06:22.441 EAL: Detected lcore 89 as core 17 on socket 0 00:06:22.441 EAL: Detected lcore 90 as core 18 on socket 0 00:06:22.441 EAL: Detected lcore 91 as core 19 on socket 0 00:06:22.441 EAL: Detected lcore 92 as core 20 on socket 0 00:06:22.441 EAL: Detected lcore 93 as core 21 on socket 0 00:06:22.441 EAL: Detected lcore 94 as core 22 on socket 0 00:06:22.441 EAL: Detected lcore 95 as core 23 on socket 0 00:06:22.441 EAL: Detected lcore 96 as core 24 on socket 0 00:06:22.441 EAL: Detected lcore 97 as core 25 on socket 0 00:06:22.441 EAL: Detected lcore 98 as core 26 on socket 0 00:06:22.441 EAL: Detected lcore 99 as core 27 on socket 0 00:06:22.441 EAL: Detected lcore 100 as core 28 on socket 0 00:06:22.441 EAL: Detected lcore 101 as core 29 on socket 0 00:06:22.441 EAL: Detected lcore 102 as core 30 on socket 0 00:06:22.441 EAL: Detected lcore 103 as core 31 on socket 0 00:06:22.441 EAL: Detected lcore 104 as core 32 on socket 0 00:06:22.441 EAL: Detected lcore 105 as core 33 on socket 0 00:06:22.441 EAL: 
Detected lcore 106 as core 34 on socket 0 00:06:22.441 EAL: Detected lcore 107 as core 35 on socket 0 00:06:22.441 EAL: Detected lcore 108 as core 0 on socket 1 00:06:22.441 EAL: Detected lcore 109 as core 1 on socket 1 00:06:22.441 EAL: Detected lcore 110 as core 2 on socket 1 00:06:22.441 EAL: Detected lcore 111 as core 3 on socket 1 00:06:22.441 EAL: Detected lcore 112 as core 4 on socket 1 00:06:22.441 EAL: Detected lcore 113 as core 5 on socket 1 00:06:22.441 EAL: Detected lcore 114 as core 6 on socket 1 00:06:22.441 EAL: Detected lcore 115 as core 7 on socket 1 00:06:22.441 EAL: Detected lcore 116 as core 8 on socket 1 00:06:22.441 EAL: Detected lcore 117 as core 9 on socket 1 00:06:22.441 EAL: Detected lcore 118 as core 10 on socket 1 00:06:22.441 EAL: Detected lcore 119 as core 11 on socket 1 00:06:22.441 EAL: Detected lcore 120 as core 12 on socket 1 00:06:22.441 EAL: Detected lcore 121 as core 13 on socket 1 00:06:22.441 EAL: Detected lcore 122 as core 14 on socket 1 00:06:22.441 EAL: Detected lcore 123 as core 15 on socket 1 00:06:22.441 EAL: Detected lcore 124 as core 16 on socket 1 00:06:22.441 EAL: Detected lcore 125 as core 17 on socket 1 00:06:22.441 EAL: Detected lcore 126 as core 18 on socket 1 00:06:22.441 EAL: Detected lcore 127 as core 19 on socket 1 00:06:22.441 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:22.441 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:22.441 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:22.441 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:22.441 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:22.441 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:22.441 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:22.441 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:22.441 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:22.441 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:22.441 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:22.441 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:22.441 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:22.441 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:22.441 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:22.441 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:22.441 EAL: Maximum logical cores by configuration: 128 00:06:22.441 EAL: Detected CPU lcores: 128 00:06:22.441 EAL: Detected NUMA nodes: 2 00:06:22.441 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:22.441 EAL: Detected shared linkage of DPDK 00:06:22.441 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:22.441 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:22.441 EAL: Registered [vdev] bus. 
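
The lcore map EAL prints above (two 36-core hyperthreaded sockets, with lcores 128-143 skipped once the 128-lcore configuration limit is reached) is read from the Linux sysfs CPU topology. The same mapping can be reproduced from a shell on the test host:

    # Rough shell equivalent of EAL's topology scan (Linux sysfs assumed)
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $lcore as core $core on socket $socket"
    done | sort -t' ' -k2 -n
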
00:06:22.441 EAL: bus.vdev log level changed from disabled to notice 00:06:22.441 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:22.442 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:22.442 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:22.442 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:22.442 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:22.442 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:22.442 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:22.442 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:22.442 EAL: No shared files mode enabled, IPC will be disabled 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Bus pci wants IOVA as 'DC' 00:06:22.442 EAL: Bus vdev wants IOVA as 'DC' 00:06:22.442 EAL: Buses did not request a specific IOVA mode. 00:06:22.442 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:22.442 EAL: Selected IOVA mode 'VA' 00:06:22.442 EAL: Probing VFIO support... 00:06:22.442 EAL: IOMMU type 1 (Type 1) is supported 00:06:22.442 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:22.442 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:22.442 EAL: VFIO support initialized 00:06:22.442 EAL: Ask a virtual area of 0x2e000 bytes 00:06:22.442 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:22.442 EAL: Setting up physically contiguous memory... 
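
EAL settles on IOVA-as-VA above because both buses report 'DC' (don't care), the IOMMU is active, and VFIO initialized. A quick pre-flight from the shell that mirrors those conditions before launching an SPDK/DPDK app (illustrative, not what EAL literally runs internally):

    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo "IOMMU groups present: IOVA as VA is possible"
    else
        echo "no IOMMU groups: expect IOVA as PA or no-IOMMU VFIO"
    fi
    lsmod | grep -q '^vfio_pci' && echo "vfio-pci module loaded"
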
00:06:22.442 EAL: Setting maximum number of open files to 524288 00:06:22.442 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:22.442 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:22.442 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:22.442 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.442 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:22.442 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:22.442 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.442 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:22.442 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:22.442 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.442 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:22.442 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:22.442 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.442 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:22.442 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:22.442 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.442 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:22.442 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:22.442 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.442 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:22.442 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:22.442 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.442 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:22.442 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:22.442 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.442 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:22.442 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:22.442 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:22.442 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.442 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:22.442 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:22.442 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.442 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:22.442 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:22.442 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.442 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:22.442 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:22.442 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.442 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:22.442 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:22.442 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.442 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:22.442 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:22.442 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.442 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:22.442 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:22.442 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.442 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:22.442 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:22.442 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.442 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:22.442 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:22.442 EAL: Hugepages will be freed exactly as allocated. 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: TSC frequency is ~2400000 KHz 00:06:22.442 EAL: Main lcore 0 is ready (tid=7f7a4be11a00;cpuset=[0]) 00:06:22.442 EAL: Trying to obtain current memory policy. 00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 0 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 2MB 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:22.442 EAL: Mem event callback 'spdk:(nil)' registered 00:06:22.442 00:06:22.442 00:06:22.442 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.442 http://cunit.sourceforge.net/ 00:06:22.442 00:06:22.442 00:06:22.442 Suite: components_suite 00:06:22.442 Test: vtophys_malloc_test ...passed 00:06:22.442 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 4 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 4MB 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was shrunk by 4MB 00:06:22.442 EAL: Trying to obtain current memory policy. 00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 4 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 6MB 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was shrunk by 6MB 00:06:22.442 EAL: Trying to obtain current memory policy. 00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 4 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 10MB 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was shrunk by 10MB 00:06:22.442 EAL: Trying to obtain current memory policy. 
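
Each 'Heap on socket 0 was expanded/shrunk by N MB' pair in the vtophys run corresponds to 2 MB hugepages (hugepage_sz:2097152 above) being mapped in and released again. While the test runs, those allocations can be watched from another terminal:

    # Poll hugepage accounting while vtophys runs (illustrative)
    watch -n1 "grep -E 'HugePages_(Total|Free)' /proc/meminfo; \
               cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
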
00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 4 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 18MB 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was shrunk by 18MB 00:06:22.442 EAL: Trying to obtain current memory policy. 00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 4 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 34MB 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was shrunk by 34MB 00:06:22.442 EAL: Trying to obtain current memory policy. 00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 4 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 66MB 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was shrunk by 66MB 00:06:22.442 EAL: Trying to obtain current memory policy. 00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 4 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 130MB 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was shrunk by 130MB 00:06:22.442 EAL: Trying to obtain current memory policy. 00:06:22.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.442 EAL: Restoring previous memory policy: 4 00:06:22.442 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.442 EAL: request: mp_malloc_sync 00:06:22.442 EAL: No shared files mode enabled, IPC is disabled 00:06:22.442 EAL: Heap on socket 0 was expanded by 258MB 00:06:22.702 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.702 EAL: request: mp_malloc_sync 00:06:22.702 EAL: No shared files mode enabled, IPC is disabled 00:06:22.702 EAL: Heap on socket 0 was shrunk by 258MB 00:06:22.702 EAL: Trying to obtain current memory policy. 
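
The expansion sizes in vtophys_spdk_malloc_test (4, 6, 10, 18, 34, 66, 130, 258 MB so far, continuing to 514 and 1026 MB below) appear to follow a 2^k + 2 MB progression, i.e. the buffer under test roughly doubles each round plus a small constant overhead; a one-liner reproduces the sequence:

    for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done
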
00:06:22.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.702 EAL: Restoring previous memory policy: 4 00:06:22.702 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.702 EAL: request: mp_malloc_sync 00:06:22.702 EAL: No shared files mode enabled, IPC is disabled 00:06:22.702 EAL: Heap on socket 0 was expanded by 514MB 00:06:22.702 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.702 EAL: request: mp_malloc_sync 00:06:22.702 EAL: No shared files mode enabled, IPC is disabled 00:06:22.702 EAL: Heap on socket 0 was shrunk by 514MB 00:06:22.703 EAL: Trying to obtain current memory policy. 00:06:22.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.961 EAL: Restoring previous memory policy: 4 00:06:22.961 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.961 EAL: request: mp_malloc_sync 00:06:22.961 EAL: No shared files mode enabled, IPC is disabled 00:06:22.961 EAL: Heap on socket 0 was expanded by 1026MB 00:06:22.961 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.221 EAL: request: mp_malloc_sync 00:06:23.221 EAL: No shared files mode enabled, IPC is disabled 00:06:23.221 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:23.221 passed 00:06:23.221 00:06:23.221 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.221 suites 1 1 n/a 0 0 00:06:23.221 tests 2 2 2 0 0 00:06:23.221 asserts 497 497 497 0 n/a 00:06:23.221 00:06:23.221 Elapsed time = 0.687 seconds 00:06:23.221 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.221 EAL: request: mp_malloc_sync 00:06:23.221 EAL: No shared files mode enabled, IPC is disabled 00:06:23.221 EAL: Heap on socket 0 was shrunk by 2MB 00:06:23.221 EAL: No shared files mode enabled, IPC is disabled 00:06:23.221 EAL: No shared files mode enabled, IPC is disabled 00:06:23.221 EAL: No shared files mode enabled, IPC is disabled 00:06:23.221 00:06:23.221 real 0m0.824s 00:06:23.221 user 0m0.428s 00:06:23.221 sys 0m0.368s 00:06:23.221 17:33:22 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.221 17:33:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:23.221 ************************************ 00:06:23.221 END TEST env_vtophys 00:06:23.221 ************************************ 00:06:23.221 17:33:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:23.221 17:33:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.221 17:33:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.221 17:33:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:23.221 ************************************ 00:06:23.221 START TEST env_pci 00:06:23.221 ************************************ 00:06:23.221 17:33:22 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:23.221 00:06:23.221 00:06:23.221 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.221 http://cunit.sourceforge.net/ 00:06:23.221 00:06:23.221 00:06:23.221 Suite: pci 00:06:23.221 Test: pci_hook ...[2024-11-20 17:33:23.011627] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2417043 has claimed it 00:06:23.221 EAL: Cannot find device (10000:00:01.0) 00:06:23.221 EAL: Failed to attach device on primary process 00:06:23.221 passed 00:06:23.221 00:06:23.221 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:23.221 suites 1 1 n/a 0 0 00:06:23.221 tests 1 1 1 0 0 00:06:23.221 asserts 25 25 25 0 n/a 00:06:23.221 00:06:23.221 Elapsed time = 0.032 seconds 00:06:23.221 00:06:23.221 real 0m0.051s 00:06:23.221 user 0m0.015s 00:06:23.221 sys 0m0.036s 00:06:23.221 17:33:23 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.221 17:33:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:23.221 ************************************ 00:06:23.221 END TEST env_pci 00:06:23.221 ************************************ 00:06:23.221 17:33:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:23.221 17:33:23 env -- env/env.sh@15 -- # uname 00:06:23.221 17:33:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:23.221 17:33:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:23.221 17:33:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:23.221 17:33:23 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:23.221 17:33:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.221 17:33:23 env -- common/autotest_common.sh@10 -- # set +x 00:06:23.481 ************************************ 00:06:23.481 START TEST env_dpdk_post_init 00:06:23.481 ************************************ 00:06:23.481 17:33:23 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:23.481 EAL: Detected CPU lcores: 128 00:06:23.481 EAL: Detected NUMA nodes: 2 00:06:23.481 EAL: Detected shared linkage of DPDK 00:06:23.481 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:23.481 EAL: Selected IOVA mode 'VA' 00:06:23.481 EAL: VFIO support initialized 00:06:23.481 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:23.481 EAL: Using IOMMU type 1 (Type 1) 00:06:23.481 EAL: Ignore mapping IO port bar(1) 00:06:23.741 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:23.741 EAL: Ignore mapping IO port bar(1) 00:06:24.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:24.003 EAL: Ignore mapping IO port bar(1) 00:06:24.264 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:24.264 EAL: Ignore mapping IO port bar(1) 00:06:24.264 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:24.524 EAL: Ignore mapping IO port bar(1) 00:06:24.524 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:24.785 EAL: Ignore mapping IO port bar(1) 00:06:24.785 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:25.046 EAL: Ignore mapping IO port bar(1) 00:06:25.046 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:25.046 EAL: Ignore mapping IO port bar(1) 00:06:25.306 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:25.567 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:25.567 EAL: Ignore mapping IO port bar(1) 00:06:25.827 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:25.827 EAL: Ignore mapping IO port bar(1) 00:06:25.827 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:26.088 EAL: Ignore mapping IO port bar(1) 00:06:26.088 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:26.349 EAL: Ignore mapping IO port bar(1) 00:06:26.349 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:26.609 EAL: Ignore mapping IO port bar(1) 00:06:26.609 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:26.609 EAL: Ignore mapping IO port bar(1) 00:06:26.871 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:26.871 EAL: Ignore mapping IO port bar(1) 00:06:27.132 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:27.132 EAL: Ignore mapping IO port bar(1) 00:06:27.392 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:27.392 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:27.392 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:27.392 Starting DPDK initialization... 00:06:27.392 Starting SPDK post initialization... 00:06:27.392 SPDK NVMe probe 00:06:27.392 Attaching to 0000:65:00.0 00:06:27.392 Attached to 0000:65:00.0 00:06:27.392 Cleaning up... 00:06:29.306 00:06:29.306 real 0m5.731s 00:06:29.306 user 0m0.183s 00:06:29.306 sys 0m0.101s 00:06:29.306 17:33:28 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.306 17:33:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:29.306 ************************************ 00:06:29.306 END TEST env_dpdk_post_init 00:06:29.306 ************************************ 00:06:29.306 17:33:28 env -- env/env.sh@26 -- # uname 00:06:29.306 17:33:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:29.306 17:33:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:29.306 17:33:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.306 17:33:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.306 17:33:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:29.306 ************************************ 00:06:29.306 START TEST env_mem_callbacks 00:06:29.306 ************************************ 00:06:29.306 17:33:28 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:29.306 EAL: Detected CPU lcores: 128 00:06:29.306 EAL: Detected NUMA nodes: 2 00:06:29.306 EAL: Detected shared linkage of DPDK 00:06:29.306 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:29.306 EAL: Selected IOVA mode 'VA' 00:06:29.306 EAL: VFIO support initialized 00:06:29.306 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:29.306 00:06:29.306 00:06:29.306 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.306 http://cunit.sourceforge.net/ 00:06:29.306 00:06:29.306 00:06:29.306 Suite: memory 00:06:29.306 Test: test ... 
00:06:29.306 register 0x200000200000 2097152 00:06:29.306 malloc 3145728 00:06:29.306 register 0x200000400000 4194304 00:06:29.306 buf 0x200000500000 len 3145728 PASSED 00:06:29.306 malloc 64 00:06:29.306 buf 0x2000004fff40 len 64 PASSED 00:06:29.306 malloc 4194304 00:06:29.306 register 0x200000800000 6291456 00:06:29.306 buf 0x200000a00000 len 4194304 PASSED 00:06:29.306 free 0x200000500000 3145728 00:06:29.306 free 0x2000004fff40 64 00:06:29.306 unregister 0x200000400000 4194304 PASSED 00:06:29.306 free 0x200000a00000 4194304 00:06:29.306 unregister 0x200000800000 6291456 PASSED 00:06:29.306 malloc 8388608 00:06:29.306 register 0x200000400000 10485760 00:06:29.306 buf 0x200000600000 len 8388608 PASSED 00:06:29.306 free 0x200000600000 8388608 00:06:29.306 unregister 0x200000400000 10485760 PASSED 00:06:29.306 passed 00:06:29.306 00:06:29.306 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.306 suites 1 1 n/a 0 0 00:06:29.306 tests 1 1 1 0 0 00:06:29.306 asserts 15 15 15 0 n/a 00:06:29.306 00:06:29.306 Elapsed time = 0.010 seconds 00:06:29.306 00:06:29.306 real 0m0.069s 00:06:29.306 user 0m0.018s 00:06:29.306 sys 0m0.050s 00:06:29.306 17:33:29 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.306 17:33:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:29.306 ************************************ 00:06:29.306 END TEST env_mem_callbacks 00:06:29.306 ************************************ 00:06:29.306 00:06:29.306 real 0m7.496s 00:06:29.306 user 0m1.101s 00:06:29.306 sys 0m0.956s 00:06:29.306 17:33:29 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.306 17:33:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:29.306 ************************************ 00:06:29.306 END TEST env 00:06:29.306 ************************************ 00:06:29.306 17:33:29 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:29.306 17:33:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.306 17:33:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.306 17:33:29 -- common/autotest_common.sh@10 -- # set +x 00:06:29.306 ************************************ 00:06:29.306 START TEST rpc 00:06:29.306 ************************************ 00:06:29.306 17:33:29 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:29.566 * Looking for test storage... 
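
With the env suite done (about 7.5 s wall clock in total above), the register/unregister lines in the mem_callbacks trace are worth a note: they look like a test-installed memory-event callback echoing each region the allocator maps and unmaps around the malloc/free calls. To rerun just that unit outside CI (paths assume a built SPDK tree; the HUGEMEM sizing is illustrative):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # or your own checkout
    sudo HUGEMEM=2048 ./scripts/setup.sh                   # reserve hugepages, bind devices
    sudo ./test/env/mem_callbacks/mem_callbacks
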
00:06:29.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:29.566 17:33:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.566 17:33:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.566 17:33:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.566 17:33:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.566 17:33:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.566 17:33:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.566 17:33:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.566 17:33:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.566 17:33:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.566 17:33:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.566 17:33:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.566 17:33:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:29.566 17:33:29 rpc -- scripts/common.sh@345 -- # : 1 00:06:29.566 17:33:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.566 17:33:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.566 17:33:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:29.566 17:33:29 rpc -- scripts/common.sh@353 -- # local d=1 00:06:29.566 17:33:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.566 17:33:29 rpc -- scripts/common.sh@355 -- # echo 1 00:06:29.566 17:33:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.566 17:33:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:29.566 17:33:29 rpc -- scripts/common.sh@353 -- # local d=2 00:06:29.566 17:33:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.566 17:33:29 rpc -- scripts/common.sh@355 -- # echo 2 00:06:29.566 17:33:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.566 17:33:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.566 17:33:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.566 17:33:29 rpc -- scripts/common.sh@368 -- # return 0 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:29.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.566 --rc genhtml_branch_coverage=1 00:06:29.566 --rc genhtml_function_coverage=1 00:06:29.566 --rc genhtml_legend=1 00:06:29.566 --rc geninfo_all_blocks=1 00:06:29.566 --rc geninfo_unexecuted_blocks=1 00:06:29.566 00:06:29.566 ' 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:29.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.566 --rc genhtml_branch_coverage=1 00:06:29.566 --rc genhtml_function_coverage=1 00:06:29.566 --rc genhtml_legend=1 00:06:29.566 --rc geninfo_all_blocks=1 00:06:29.566 --rc geninfo_unexecuted_blocks=1 00:06:29.566 00:06:29.566 ' 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:29.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.566 --rc genhtml_branch_coverage=1 00:06:29.566 --rc genhtml_function_coverage=1 
00:06:29.566 --rc genhtml_legend=1 00:06:29.566 --rc geninfo_all_blocks=1 00:06:29.566 --rc geninfo_unexecuted_blocks=1 00:06:29.566 00:06:29.566 ' 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:29.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.566 --rc genhtml_branch_coverage=1 00:06:29.566 --rc genhtml_function_coverage=1 00:06:29.566 --rc genhtml_legend=1 00:06:29.566 --rc geninfo_all_blocks=1 00:06:29.566 --rc geninfo_unexecuted_blocks=1 00:06:29.566 00:06:29.566 ' 00:06:29.566 17:33:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2418376 00:06:29.566 17:33:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.566 17:33:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2418376 00:06:29.566 17:33:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@831 -- # '[' -z 2418376 ']' 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.566 17:33:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.566 [2024-11-20 17:33:29.414739] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:29.566 [2024-11-20 17:33:29.414815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2418376 ] 00:06:29.826 [2024-11-20 17:33:29.495954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.826 [2024-11-20 17:33:29.544564] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:29.826 [2024-11-20 17:33:29.544616] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2418376' to capture a snapshot of events at runtime. 00:06:29.826 [2024-11-20 17:33:29.544628] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.826 [2024-11-20 17:33:29.544639] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.826 [2024-11-20 17:33:29.544647] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2418376 for offline analysis/debug. 
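
The rpc suite launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A condensed sketch of that handshake, assuming an SPDK checkout as the working directory (the real waitforlisten in autotest_common.sh does more, retrying an actual RPC with a bounded retry count):

    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    trap 'kill $spdk_pid' EXIT
    # poll until the UNIX-domain RPC socket accepts a request
    until ./scripts/rpc.py -t 1 spdk_get_version > /dev/null 2>&1; do
        sleep 0.2
    done
    echo "spdk_tgt ($spdk_pid) is up; RPC ready on /var/tmp/spdk.sock"
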
00:06:29.826 [2024-11-20 17:33:29.544675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.399 17:33:30 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.399 17:33:30 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:30.399 17:33:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:30.399 17:33:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:30.399 17:33:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:30.399 17:33:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:30.399 17:33:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.399 17:33:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.399 17:33:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.399 ************************************ 00:06:30.399 START TEST rpc_integrity 00:06:30.399 ************************************ 00:06:30.399 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:30.399 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:30.399 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.399 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.399 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.399 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:30.399 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:30.660 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:30.660 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:30.660 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.660 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.660 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.660 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:30.660 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:30.660 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.660 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.660 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.660 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:30.660 { 00:06:30.660 "name": "Malloc0", 00:06:30.660 "aliases": [ 00:06:30.660 "dde5a060-5a2c-49ed-8bdc-fe3aec898d6b" 00:06:30.660 ], 00:06:30.660 "product_name": "Malloc disk", 00:06:30.660 "block_size": 512, 00:06:30.660 "num_blocks": 16384, 00:06:30.660 "uuid": "dde5a060-5a2c-49ed-8bdc-fe3aec898d6b", 00:06:30.660 "assigned_rate_limits": { 00:06:30.660 "rw_ios_per_sec": 0, 00:06:30.660 "rw_mbytes_per_sec": 0, 00:06:30.660 "r_mbytes_per_sec": 0, 00:06:30.660 "w_mbytes_per_sec": 0 00:06:30.660 }, 
00:06:30.660 "claimed": false, 00:06:30.660 "zoned": false, 00:06:30.660 "supported_io_types": { 00:06:30.660 "read": true, 00:06:30.660 "write": true, 00:06:30.660 "unmap": true, 00:06:30.660 "flush": true, 00:06:30.660 "reset": true, 00:06:30.660 "nvme_admin": false, 00:06:30.660 "nvme_io": false, 00:06:30.660 "nvme_io_md": false, 00:06:30.660 "write_zeroes": true, 00:06:30.660 "zcopy": true, 00:06:30.660 "get_zone_info": false, 00:06:30.660 "zone_management": false, 00:06:30.660 "zone_append": false, 00:06:30.660 "compare": false, 00:06:30.660 "compare_and_write": false, 00:06:30.660 "abort": true, 00:06:30.660 "seek_hole": false, 00:06:30.660 "seek_data": false, 00:06:30.660 "copy": true, 00:06:30.660 "nvme_iov_md": false 00:06:30.660 }, 00:06:30.660 "memory_domains": [ 00:06:30.660 { 00:06:30.660 "dma_device_id": "system", 00:06:30.660 "dma_device_type": 1 00:06:30.660 }, 00:06:30.660 { 00:06:30.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.660 "dma_device_type": 2 00:06:30.660 } 00:06:30.660 ], 00:06:30.660 "driver_specific": {} 00:06:30.660 } 00:06:30.660 ]' 00:06:30.660 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:30.660 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:30.660 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:30.660 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.660 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.660 [2024-11-20 17:33:30.401958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:30.660 [2024-11-20 17:33:30.402005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:30.660 [2024-11-20 17:33:30.402029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23a8e00 00:06:30.660 [2024-11-20 17:33:30.402040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:30.660 [2024-11-20 17:33:30.403622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:30.661 [2024-11-20 17:33:30.403660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:30.661 Passthru0 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:30.661 { 00:06:30.661 "name": "Malloc0", 00:06:30.661 "aliases": [ 00:06:30.661 "dde5a060-5a2c-49ed-8bdc-fe3aec898d6b" 00:06:30.661 ], 00:06:30.661 "product_name": "Malloc disk", 00:06:30.661 "block_size": 512, 00:06:30.661 "num_blocks": 16384, 00:06:30.661 "uuid": "dde5a060-5a2c-49ed-8bdc-fe3aec898d6b", 00:06:30.661 "assigned_rate_limits": { 00:06:30.661 "rw_ios_per_sec": 0, 00:06:30.661 "rw_mbytes_per_sec": 0, 00:06:30.661 "r_mbytes_per_sec": 0, 00:06:30.661 "w_mbytes_per_sec": 0 00:06:30.661 }, 00:06:30.661 "claimed": true, 00:06:30.661 "claim_type": "exclusive_write", 00:06:30.661 "zoned": false, 00:06:30.661 "supported_io_types": { 00:06:30.661 "read": true, 00:06:30.661 "write": true, 00:06:30.661 "unmap": true, 00:06:30.661 "flush": 
true, 00:06:30.661 "reset": true, 00:06:30.661 "nvme_admin": false, 00:06:30.661 "nvme_io": false, 00:06:30.661 "nvme_io_md": false, 00:06:30.661 "write_zeroes": true, 00:06:30.661 "zcopy": true, 00:06:30.661 "get_zone_info": false, 00:06:30.661 "zone_management": false, 00:06:30.661 "zone_append": false, 00:06:30.661 "compare": false, 00:06:30.661 "compare_and_write": false, 00:06:30.661 "abort": true, 00:06:30.661 "seek_hole": false, 00:06:30.661 "seek_data": false, 00:06:30.661 "copy": true, 00:06:30.661 "nvme_iov_md": false 00:06:30.661 }, 00:06:30.661 "memory_domains": [ 00:06:30.661 { 00:06:30.661 "dma_device_id": "system", 00:06:30.661 "dma_device_type": 1 00:06:30.661 }, 00:06:30.661 { 00:06:30.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.661 "dma_device_type": 2 00:06:30.661 } 00:06:30.661 ], 00:06:30.661 "driver_specific": {} 00:06:30.661 }, 00:06:30.661 { 00:06:30.661 "name": "Passthru0", 00:06:30.661 "aliases": [ 00:06:30.661 "f93c3f37-a14a-5f13-84b1-40dadf9ef636" 00:06:30.661 ], 00:06:30.661 "product_name": "passthru", 00:06:30.661 "block_size": 512, 00:06:30.661 "num_blocks": 16384, 00:06:30.661 "uuid": "f93c3f37-a14a-5f13-84b1-40dadf9ef636", 00:06:30.661 "assigned_rate_limits": { 00:06:30.661 "rw_ios_per_sec": 0, 00:06:30.661 "rw_mbytes_per_sec": 0, 00:06:30.661 "r_mbytes_per_sec": 0, 00:06:30.661 "w_mbytes_per_sec": 0 00:06:30.661 }, 00:06:30.661 "claimed": false, 00:06:30.661 "zoned": false, 00:06:30.661 "supported_io_types": { 00:06:30.661 "read": true, 00:06:30.661 "write": true, 00:06:30.661 "unmap": true, 00:06:30.661 "flush": true, 00:06:30.661 "reset": true, 00:06:30.661 "nvme_admin": false, 00:06:30.661 "nvme_io": false, 00:06:30.661 "nvme_io_md": false, 00:06:30.661 "write_zeroes": true, 00:06:30.661 "zcopy": true, 00:06:30.661 "get_zone_info": false, 00:06:30.661 "zone_management": false, 00:06:30.661 "zone_append": false, 00:06:30.661 "compare": false, 00:06:30.661 "compare_and_write": false, 00:06:30.661 "abort": true, 00:06:30.661 "seek_hole": false, 00:06:30.661 "seek_data": false, 00:06:30.661 "copy": true, 00:06:30.661 "nvme_iov_md": false 00:06:30.661 }, 00:06:30.661 "memory_domains": [ 00:06:30.661 { 00:06:30.661 "dma_device_id": "system", 00:06:30.661 "dma_device_type": 1 00:06:30.661 }, 00:06:30.661 { 00:06:30.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.661 "dma_device_type": 2 00:06:30.661 } 00:06:30.661 ], 00:06:30.661 "driver_specific": { 00:06:30.661 "passthru": { 00:06:30.661 "name": "Passthru0", 00:06:30.661 "base_bdev_name": "Malloc0" 00:06:30.661 } 00:06:30.661 } 00:06:30.661 } 00:06:30.661 ]' 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:30.661 17:33:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:30.661 00:06:30.661 real 0m0.296s 00:06:30.661 user 0m0.184s 00:06:30.661 sys 0m0.044s 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.661 17:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:30.661 ************************************ 00:06:30.661 END TEST rpc_integrity 00:06:30.661 ************************************ 00:06:30.923 17:33:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:30.923 17:33:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.923 17:33:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.923 17:33:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.923 ************************************ 00:06:30.923 START TEST rpc_plugins 00:06:30.923 ************************************ 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:30.923 { 00:06:30.923 "name": "Malloc1", 00:06:30.923 "aliases": [ 00:06:30.923 "5e6d58d5-2e77-4eb5-80f9-5bf3a6a3c4a1" 00:06:30.923 ], 00:06:30.923 "product_name": "Malloc disk", 00:06:30.923 "block_size": 4096, 00:06:30.923 "num_blocks": 256, 00:06:30.923 "uuid": "5e6d58d5-2e77-4eb5-80f9-5bf3a6a3c4a1", 00:06:30.923 "assigned_rate_limits": { 00:06:30.923 "rw_ios_per_sec": 0, 00:06:30.923 "rw_mbytes_per_sec": 0, 00:06:30.923 "r_mbytes_per_sec": 0, 00:06:30.923 "w_mbytes_per_sec": 0 00:06:30.923 }, 00:06:30.923 "claimed": false, 00:06:30.923 "zoned": false, 00:06:30.923 "supported_io_types": { 00:06:30.923 "read": true, 00:06:30.923 "write": true, 00:06:30.923 "unmap": true, 00:06:30.923 "flush": true, 00:06:30.923 "reset": true, 00:06:30.923 "nvme_admin": false, 00:06:30.923 "nvme_io": false, 00:06:30.923 "nvme_io_md": false, 00:06:30.923 "write_zeroes": true, 00:06:30.923 "zcopy": true, 00:06:30.923 "get_zone_info": false, 00:06:30.923 "zone_management": false, 00:06:30.923 "zone_append": false, 00:06:30.923 "compare": false, 00:06:30.923 "compare_and_write": false, 00:06:30.923 "abort": true, 00:06:30.923 "seek_hole": false, 00:06:30.923 "seek_data": false, 00:06:30.923 "copy": true, 00:06:30.923 "nvme_iov_md": false 
00:06:30.923 }, 00:06:30.923 "memory_domains": [ 00:06:30.923 { 00:06:30.923 "dma_device_id": "system", 00:06:30.923 "dma_device_type": 1 00:06:30.923 }, 00:06:30.923 { 00:06:30.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.923 "dma_device_type": 2 00:06:30.923 } 00:06:30.923 ], 00:06:30.923 "driver_specific": {} 00:06:30.923 } 00:06:30.923 ]' 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:30.923 17:33:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:30.923 00:06:30.923 real 0m0.140s 00:06:30.923 user 0m0.085s 00:06:30.923 sys 0m0.022s 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.923 17:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:30.923 ************************************ 00:06:30.923 END TEST rpc_plugins 00:06:30.923 ************************************ 00:06:30.923 17:33:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:30.923 17:33:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.923 17:33:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.923 17:33:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.185 ************************************ 00:06:31.185 START TEST rpc_trace_cmd_test 00:06:31.185 ************************************ 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:31.185 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2418376", 00:06:31.185 "tpoint_group_mask": "0x8", 00:06:31.185 "iscsi_conn": { 00:06:31.185 "mask": "0x2", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "scsi": { 00:06:31.185 "mask": "0x4", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "bdev": { 00:06:31.185 "mask": "0x8", 00:06:31.185 "tpoint_mask": "0xffffffffffffffff" 00:06:31.185 }, 00:06:31.185 "nvmf_rdma": { 00:06:31.185 "mask": "0x10", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "nvmf_tcp": { 00:06:31.185 "mask": "0x20", 00:06:31.185 
"tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "ftl": { 00:06:31.185 "mask": "0x40", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "blobfs": { 00:06:31.185 "mask": "0x80", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "dsa": { 00:06:31.185 "mask": "0x200", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "thread": { 00:06:31.185 "mask": "0x400", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "nvme_pcie": { 00:06:31.185 "mask": "0x800", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "iaa": { 00:06:31.185 "mask": "0x1000", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "nvme_tcp": { 00:06:31.185 "mask": "0x2000", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "bdev_nvme": { 00:06:31.185 "mask": "0x4000", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "sock": { 00:06:31.185 "mask": "0x8000", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "blob": { 00:06:31.185 "mask": "0x10000", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 }, 00:06:31.185 "bdev_raid": { 00:06:31.185 "mask": "0x20000", 00:06:31.185 "tpoint_mask": "0x0" 00:06:31.185 } 00:06:31.185 }' 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:31.185 17:33:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:31.185 17:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:31.185 17:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:31.185 17:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:31.185 17:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:31.448 17:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:31.448 00:06:31.448 real 0m0.238s 00:06:31.448 user 0m0.186s 00:06:31.448 sys 0m0.040s 00:06:31.448 17:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.448 17:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.448 ************************************ 00:06:31.448 END TEST rpc_trace_cmd_test 00:06:31.448 ************************************ 00:06:31.448 17:33:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:31.448 17:33:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:31.448 17:33:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:31.448 17:33:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.448 17:33:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.448 17:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.448 ************************************ 00:06:31.448 START TEST rpc_daemon_integrity 00:06:31.448 ************************************ 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:31.448 { 00:06:31.448 "name": "Malloc2", 00:06:31.448 "aliases": [ 00:06:31.448 "43f48350-bd56-4704-9cb4-e70e9d8b5d6d" 00:06:31.448 ], 00:06:31.448 "product_name": "Malloc disk", 00:06:31.448 "block_size": 512, 00:06:31.448 "num_blocks": 16384, 00:06:31.448 "uuid": "43f48350-bd56-4704-9cb4-e70e9d8b5d6d", 00:06:31.448 "assigned_rate_limits": { 00:06:31.448 "rw_ios_per_sec": 0, 00:06:31.448 "rw_mbytes_per_sec": 0, 00:06:31.448 "r_mbytes_per_sec": 0, 00:06:31.448 "w_mbytes_per_sec": 0 00:06:31.448 }, 00:06:31.448 "claimed": false, 00:06:31.448 "zoned": false, 00:06:31.448 "supported_io_types": { 00:06:31.448 "read": true, 00:06:31.448 "write": true, 00:06:31.448 "unmap": true, 00:06:31.448 "flush": true, 00:06:31.448 "reset": true, 00:06:31.448 "nvme_admin": false, 00:06:31.448 "nvme_io": false, 00:06:31.448 "nvme_io_md": false, 00:06:31.448 "write_zeroes": true, 00:06:31.448 "zcopy": true, 00:06:31.448 "get_zone_info": false, 00:06:31.448 "zone_management": false, 00:06:31.448 "zone_append": false, 00:06:31.448 "compare": false, 00:06:31.448 "compare_and_write": false, 00:06:31.448 "abort": true, 00:06:31.448 "seek_hole": false, 00:06:31.448 "seek_data": false, 00:06:31.448 "copy": true, 00:06:31.448 "nvme_iov_md": false 00:06:31.448 }, 00:06:31.448 "memory_domains": [ 00:06:31.448 { 00:06:31.448 "dma_device_id": "system", 00:06:31.448 "dma_device_type": 1 00:06:31.448 }, 00:06:31.448 { 00:06:31.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.448 "dma_device_type": 2 00:06:31.448 } 00:06:31.448 ], 00:06:31.448 "driver_specific": {} 00:06:31.448 } 00:06:31.448 ]' 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.448 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.448 [2024-11-20 17:33:31.328521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:31.448 [2024-11-20 17:33:31.328564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:31.449 
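The vbdev_passthru NOTICE lines here come from stacking a passthru bdev on the freshly created malloc bdev, the same pattern both integrity tests exercise. Outside the harness the sequence can be replayed with SPDK's rpc.py; a minimal sketch, assuming a target is already running and that $SPDK_DIR (an illustrative name, not from this log) points at an SPDK checkout:

  rpc="$SPDK_DIR/scripts/rpc.py"
  malloc=$($rpc bdev_malloc_create 8 512)   # 8 MiB bdev, 512-byte blocks; prints the new bdev name
  $rpc bdev_passthru_create -b "$malloc" -p Passthru0
  $rpc bdev_get_bdevs | jq length           # expect 2: the malloc plus the passthru on top
  $rpc bdev_passthru_delete Passthru0       # tear down in reverse order
  $rpc bdev_malloc_delete "$malloc"
  $rpc bdev_get_bdevs | jq length           # expect 0 again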
[2024-11-20 17:33:31.328593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22654d0 00:06:31.449 [2024-11-20 17:33:31.328604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:31.449 [2024-11-20 17:33:31.330246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:31.449 [2024-11-20 17:33:31.330286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:31.449 Passthru0 00:06:31.449 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.449 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:31.449 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.449 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.449 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.761 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:31.761 { 00:06:31.761 "name": "Malloc2", 00:06:31.761 "aliases": [ 00:06:31.761 "43f48350-bd56-4704-9cb4-e70e9d8b5d6d" 00:06:31.761 ], 00:06:31.761 "product_name": "Malloc disk", 00:06:31.761 "block_size": 512, 00:06:31.761 "num_blocks": 16384, 00:06:31.761 "uuid": "43f48350-bd56-4704-9cb4-e70e9d8b5d6d", 00:06:31.761 "assigned_rate_limits": { 00:06:31.761 "rw_ios_per_sec": 0, 00:06:31.761 "rw_mbytes_per_sec": 0, 00:06:31.761 "r_mbytes_per_sec": 0, 00:06:31.761 "w_mbytes_per_sec": 0 00:06:31.761 }, 00:06:31.761 "claimed": true, 00:06:31.761 "claim_type": "exclusive_write", 00:06:31.761 "zoned": false, 00:06:31.761 "supported_io_types": { 00:06:31.761 "read": true, 00:06:31.761 "write": true, 00:06:31.761 "unmap": true, 00:06:31.761 "flush": true, 00:06:31.761 "reset": true, 00:06:31.761 "nvme_admin": false, 00:06:31.761 "nvme_io": false, 00:06:31.761 "nvme_io_md": false, 00:06:31.761 "write_zeroes": true, 00:06:31.761 "zcopy": true, 00:06:31.761 "get_zone_info": false, 00:06:31.761 "zone_management": false, 00:06:31.761 "zone_append": false, 00:06:31.761 "compare": false, 00:06:31.761 "compare_and_write": false, 00:06:31.761 "abort": true, 00:06:31.761 "seek_hole": false, 00:06:31.761 "seek_data": false, 00:06:31.761 "copy": true, 00:06:31.761 "nvme_iov_md": false 00:06:31.761 }, 00:06:31.761 "memory_domains": [ 00:06:31.761 { 00:06:31.761 "dma_device_id": "system", 00:06:31.761 "dma_device_type": 1 00:06:31.761 }, 00:06:31.761 { 00:06:31.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.761 "dma_device_type": 2 00:06:31.761 } 00:06:31.761 ], 00:06:31.761 "driver_specific": {} 00:06:31.761 }, 00:06:31.761 { 00:06:31.761 "name": "Passthru0", 00:06:31.761 "aliases": [ 00:06:31.761 "c7df846a-a917-5dd0-b6c8-d5cefde8ff34" 00:06:31.761 ], 00:06:31.761 "product_name": "passthru", 00:06:31.761 "block_size": 512, 00:06:31.761 "num_blocks": 16384, 00:06:31.761 "uuid": "c7df846a-a917-5dd0-b6c8-d5cefde8ff34", 00:06:31.761 "assigned_rate_limits": { 00:06:31.761 "rw_ios_per_sec": 0, 00:06:31.761 "rw_mbytes_per_sec": 0, 00:06:31.761 "r_mbytes_per_sec": 0, 00:06:31.761 "w_mbytes_per_sec": 0 00:06:31.761 }, 00:06:31.761 "claimed": false, 00:06:31.761 "zoned": false, 00:06:31.761 "supported_io_types": { 00:06:31.761 "read": true, 00:06:31.761 "write": true, 00:06:31.761 "unmap": true, 00:06:31.761 "flush": true, 00:06:31.761 "reset": true, 00:06:31.761 "nvme_admin": false, 00:06:31.761 "nvme_io": false, 00:06:31.761 "nvme_io_md": false, 00:06:31.761 
"write_zeroes": true, 00:06:31.761 "zcopy": true, 00:06:31.761 "get_zone_info": false, 00:06:31.761 "zone_management": false, 00:06:31.761 "zone_append": false, 00:06:31.761 "compare": false, 00:06:31.761 "compare_and_write": false, 00:06:31.761 "abort": true, 00:06:31.761 "seek_hole": false, 00:06:31.761 "seek_data": false, 00:06:31.761 "copy": true, 00:06:31.761 "nvme_iov_md": false 00:06:31.761 }, 00:06:31.761 "memory_domains": [ 00:06:31.761 { 00:06:31.761 "dma_device_id": "system", 00:06:31.761 "dma_device_type": 1 00:06:31.761 }, 00:06:31.761 { 00:06:31.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.761 "dma_device_type": 2 00:06:31.761 } 00:06:31.761 ], 00:06:31.762 "driver_specific": { 00:06:31.762 "passthru": { 00:06:31.762 "name": "Passthru0", 00:06:31.762 "base_bdev_name": "Malloc2" 00:06:31.762 } 00:06:31.762 } 00:06:31.762 } 00:06:31.762 ]' 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:31.762 00:06:31.762 real 0m0.305s 00:06:31.762 user 0m0.188s 00:06:31.762 sys 0m0.048s 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.762 17:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:31.762 ************************************ 00:06:31.762 END TEST rpc_daemon_integrity 00:06:31.762 ************************************ 00:06:31.762 17:33:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:31.762 17:33:31 rpc -- rpc/rpc.sh@84 -- # killprocess 2418376 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@950 -- # '[' -z 2418376 ']' 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@954 -- # kill -0 2418376 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@955 -- # uname 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2418376 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.762 17:33:31 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2418376' 00:06:31.762 killing process with pid 2418376 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@969 -- # kill 2418376 00:06:31.762 17:33:31 rpc -- common/autotest_common.sh@974 -- # wait 2418376 00:06:32.068 00:06:32.068 real 0m2.698s 00:06:32.068 user 0m3.415s 00:06:32.068 sys 0m0.845s 00:06:32.068 17:33:31 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.068 17:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.068 ************************************ 00:06:32.068 END TEST rpc 00:06:32.068 ************************************ 00:06:32.068 17:33:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:32.068 17:33:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.068 17:33:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.068 17:33:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.068 ************************************ 00:06:32.068 START TEST skip_rpc 00:06:32.068 ************************************ 00:06:32.068 17:33:31 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:32.330 * Looking for test storage... 00:06:32.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:32.330 17:33:32 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.330 17:33:32 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.330 17:33:32 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.330 17:33:32 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.330 17:33:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:32.330 17:33:32 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.330 17:33:32 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.330 --rc genhtml_branch_coverage=1 00:06:32.330 --rc genhtml_function_coverage=1 00:06:32.330 --rc genhtml_legend=1 00:06:32.330 --rc geninfo_all_blocks=1 00:06:32.330 --rc geninfo_unexecuted_blocks=1 00:06:32.330 00:06:32.330 ' 00:06:32.330 17:33:32 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.330 --rc genhtml_branch_coverage=1 00:06:32.330 --rc genhtml_function_coverage=1 00:06:32.330 --rc genhtml_legend=1 00:06:32.330 --rc geninfo_all_blocks=1 00:06:32.330 --rc geninfo_unexecuted_blocks=1 00:06:32.330 00:06:32.330 ' 00:06:32.330 17:33:32 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.331 --rc genhtml_branch_coverage=1 00:06:32.331 --rc genhtml_function_coverage=1 00:06:32.331 --rc genhtml_legend=1 00:06:32.331 --rc geninfo_all_blocks=1 00:06:32.331 --rc geninfo_unexecuted_blocks=1 00:06:32.331 00:06:32.331 ' 00:06:32.331 17:33:32 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.331 --rc genhtml_branch_coverage=1 00:06:32.331 --rc genhtml_function_coverage=1 00:06:32.331 --rc genhtml_legend=1 00:06:32.331 --rc geninfo_all_blocks=1 00:06:32.331 --rc geninfo_unexecuted_blocks=1 00:06:32.331 00:06:32.331 ' 00:06:32.331 17:33:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:32.331 17:33:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:32.331 17:33:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:32.331 17:33:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.331 17:33:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.331 17:33:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.331 ************************************ 00:06:32.331 START TEST skip_rpc 00:06:32.331 ************************************ 00:06:32.331 17:33:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:32.331 
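test_skip_rpc, entered here, reduces to a short sequence: boot the target with --no-rpc-server, prove that an RPC fails, then shut the target down. A condensed sketch of that flow (the fixed sleep mirrors the test's own 5-second wait; $SPDK_DIR is illustrative):

  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5                                   # the test waits a fixed 5 s rather than polling a socket
  # with no RPC server listening, any RPC must fail
  if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then
      echo "unexpected: RPC succeeded without an RPC server" >&2
      exit 1
  fi
  kill "$pid"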
17:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2419231 00:06:32.331 17:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.331 17:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:32.331 17:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:32.331 [2024-11-20 17:33:32.224117] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:32.331 [2024-11-20 17:33:32.224186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419231 ] 00:06:32.592 [2024-11-20 17:33:32.307540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.592 [2024-11-20 17:33:32.354525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.887 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2419231 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2419231 ']' 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2419231 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2419231 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2419231' 00:06:37.888 killing process with pid 2419231 00:06:37.888 
17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2419231 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2419231 00:06:37.888 00:06:37.888 real 0m5.270s 00:06:37.888 user 0m5.030s 00:06:37.888 sys 0m0.285s 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.888 17:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.888 ************************************ 00:06:37.888 END TEST skip_rpc 00:06:37.888 ************************************ 00:06:37.888 17:33:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:37.888 17:33:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.888 17:33:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.888 17:33:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.888 ************************************ 00:06:37.888 START TEST skip_rpc_with_json 00:06:37.888 ************************************ 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2420269 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2420269 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2420269 ']' 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.888 17:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:37.888 [2024-11-20 17:33:37.577536] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
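skip_rpc_with_json, which starts here, checks that state built over RPC survives a restart driven purely by a saved JSON config. Roughly, under the same illustrative $SPDK_DIR assumption and with the waits simplified (paths match this run):

  CONFIG="$SPDK_DIR/test/rpc/config.json"
  LOG="$SPDK_DIR/test/rpc/log.txt"
  rpc="$SPDK_DIR/scripts/rpc.py"

  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 & tgt=$!
  sleep 5                                   # stand-in for waitforlisten
  $rpc nvmf_create_transport -t tcp         # state that must survive the restart
  $rpc save_config > "$CONFIG"
  kill "$tgt"; wait "$tgt"

  # second boot: no RPC server, config replayed from the saved JSON
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &
  tgt=$!
  sleep 5
  grep -q 'TCP Transport Init' "$LOG"       # the transport came back from JSON, not from an RPC
  kill "$tgt"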
00:06:37.888 [2024-11-20 17:33:37.577588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420269 ] 00:06:37.888 [2024-11-20 17:33:37.652875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.888 [2024-11-20 17:33:37.695778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:38.461 [2024-11-20 17:33:38.357476] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:38.461 request: 00:06:38.461 { 00:06:38.461 "trtype": "tcp", 00:06:38.461 "method": "nvmf_get_transports", 00:06:38.461 "req_id": 1 00:06:38.461 } 00:06:38.461 Got JSON-RPC error response 00:06:38.461 response: 00:06:38.461 { 00:06:38.461 "code": -19, 00:06:38.461 "message": "No such device" 00:06:38.461 } 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.461 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:38.461 [2024-11-20 17:33:38.369574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.722 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.722 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:38.722 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.722 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:38.722 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.722 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:38.722 { 00:06:38.722 "subsystems": [ 00:06:38.722 { 00:06:38.722 "subsystem": "fsdev", 00:06:38.722 "config": [ 00:06:38.722 { 00:06:38.722 "method": "fsdev_set_opts", 00:06:38.722 "params": { 00:06:38.722 "fsdev_io_pool_size": 65535, 00:06:38.722 "fsdev_io_cache_size": 256 00:06:38.722 } 00:06:38.722 } 00:06:38.722 ] 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "subsystem": "vfio_user_target", 00:06:38.722 "config": null 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "subsystem": "keyring", 00:06:38.722 "config": [] 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "subsystem": "iobuf", 00:06:38.722 "config": [ 00:06:38.722 { 00:06:38.722 "method": "iobuf_set_options", 00:06:38.722 "params": { 00:06:38.722 "small_pool_count": 8192, 00:06:38.722 "large_pool_count": 1024, 00:06:38.722 "small_bufsize": 8192, 00:06:38.722 "large_bufsize": 135168 00:06:38.722 } 00:06:38.722 } 00:06:38.722 ] 00:06:38.722 }, 00:06:38.722 { 
00:06:38.722 "subsystem": "sock", 00:06:38.722 "config": [ 00:06:38.722 { 00:06:38.722 "method": "sock_set_default_impl", 00:06:38.722 "params": { 00:06:38.722 "impl_name": "posix" 00:06:38.722 } 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "method": "sock_impl_set_options", 00:06:38.722 "params": { 00:06:38.722 "impl_name": "ssl", 00:06:38.722 "recv_buf_size": 4096, 00:06:38.722 "send_buf_size": 4096, 00:06:38.722 "enable_recv_pipe": true, 00:06:38.722 "enable_quickack": false, 00:06:38.722 "enable_placement_id": 0, 00:06:38.722 "enable_zerocopy_send_server": true, 00:06:38.722 "enable_zerocopy_send_client": false, 00:06:38.722 "zerocopy_threshold": 0, 00:06:38.722 "tls_version": 0, 00:06:38.722 "enable_ktls": false 00:06:38.722 } 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "method": "sock_impl_set_options", 00:06:38.722 "params": { 00:06:38.722 "impl_name": "posix", 00:06:38.722 "recv_buf_size": 2097152, 00:06:38.722 "send_buf_size": 2097152, 00:06:38.722 "enable_recv_pipe": true, 00:06:38.722 "enable_quickack": false, 00:06:38.722 "enable_placement_id": 0, 00:06:38.722 "enable_zerocopy_send_server": true, 00:06:38.722 "enable_zerocopy_send_client": false, 00:06:38.722 "zerocopy_threshold": 0, 00:06:38.722 "tls_version": 0, 00:06:38.722 "enable_ktls": false 00:06:38.722 } 00:06:38.722 } 00:06:38.722 ] 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "subsystem": "vmd", 00:06:38.722 "config": [] 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "subsystem": "accel", 00:06:38.722 "config": [ 00:06:38.722 { 00:06:38.722 "method": "accel_set_options", 00:06:38.722 "params": { 00:06:38.722 "small_cache_size": 128, 00:06:38.722 "large_cache_size": 16, 00:06:38.722 "task_count": 2048, 00:06:38.722 "sequence_count": 2048, 00:06:38.722 "buf_count": 2048 00:06:38.722 } 00:06:38.722 } 00:06:38.722 ] 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "subsystem": "bdev", 00:06:38.722 "config": [ 00:06:38.722 { 00:06:38.722 "method": "bdev_set_options", 00:06:38.722 "params": { 00:06:38.722 "bdev_io_pool_size": 65535, 00:06:38.722 "bdev_io_cache_size": 256, 00:06:38.722 "bdev_auto_examine": true, 00:06:38.722 "iobuf_small_cache_size": 128, 00:06:38.722 "iobuf_large_cache_size": 16 00:06:38.722 } 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "method": "bdev_raid_set_options", 00:06:38.722 "params": { 00:06:38.722 "process_window_size_kb": 1024, 00:06:38.722 "process_max_bandwidth_mb_sec": 0 00:06:38.722 } 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "method": "bdev_iscsi_set_options", 00:06:38.722 "params": { 00:06:38.722 "timeout_sec": 30 00:06:38.722 } 00:06:38.722 }, 00:06:38.722 { 00:06:38.722 "method": "bdev_nvme_set_options", 00:06:38.722 "params": { 00:06:38.722 "action_on_timeout": "none", 00:06:38.722 "timeout_us": 0, 00:06:38.722 "timeout_admin_us": 0, 00:06:38.722 "keep_alive_timeout_ms": 10000, 00:06:38.722 "arbitration_burst": 0, 00:06:38.722 "low_priority_weight": 0, 00:06:38.722 "medium_priority_weight": 0, 00:06:38.722 "high_priority_weight": 0, 00:06:38.722 "nvme_adminq_poll_period_us": 10000, 00:06:38.722 "nvme_ioq_poll_period_us": 0, 00:06:38.722 "io_queue_requests": 0, 00:06:38.722 "delay_cmd_submit": true, 00:06:38.722 "transport_retry_count": 4, 00:06:38.722 "bdev_retry_count": 3, 00:06:38.722 "transport_ack_timeout": 0, 00:06:38.722 "ctrlr_loss_timeout_sec": 0, 00:06:38.723 "reconnect_delay_sec": 0, 00:06:38.723 "fast_io_fail_timeout_sec": 0, 00:06:38.723 "disable_auto_failback": false, 00:06:38.723 "generate_uuids": false, 00:06:38.723 "transport_tos": 0, 00:06:38.723 "nvme_error_stat": false, 
00:06:38.723 "rdma_srq_size": 0, 00:06:38.723 "io_path_stat": false, 00:06:38.723 "allow_accel_sequence": false, 00:06:38.723 "rdma_max_cq_size": 0, 00:06:38.723 "rdma_cm_event_timeout_ms": 0, 00:06:38.723 "dhchap_digests": [ 00:06:38.723 "sha256", 00:06:38.723 "sha384", 00:06:38.723 "sha512" 00:06:38.723 ], 00:06:38.723 "dhchap_dhgroups": [ 00:06:38.723 "null", 00:06:38.723 "ffdhe2048", 00:06:38.723 "ffdhe3072", 00:06:38.723 "ffdhe4096", 00:06:38.723 "ffdhe6144", 00:06:38.723 "ffdhe8192" 00:06:38.723 ] 00:06:38.723 } 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "method": "bdev_nvme_set_hotplug", 00:06:38.723 "params": { 00:06:38.723 "period_us": 100000, 00:06:38.723 "enable": false 00:06:38.723 } 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "method": "bdev_wait_for_examine" 00:06:38.723 } 00:06:38.723 ] 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "subsystem": "scsi", 00:06:38.723 "config": null 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "subsystem": "scheduler", 00:06:38.723 "config": [ 00:06:38.723 { 00:06:38.723 "method": "framework_set_scheduler", 00:06:38.723 "params": { 00:06:38.723 "name": "static" 00:06:38.723 } 00:06:38.723 } 00:06:38.723 ] 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "subsystem": "vhost_scsi", 00:06:38.723 "config": [] 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "subsystem": "vhost_blk", 00:06:38.723 "config": [] 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "subsystem": "ublk", 00:06:38.723 "config": [] 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "subsystem": "nbd", 00:06:38.723 "config": [] 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "subsystem": "nvmf", 00:06:38.723 "config": [ 00:06:38.723 { 00:06:38.723 "method": "nvmf_set_config", 00:06:38.723 "params": { 00:06:38.723 "discovery_filter": "match_any", 00:06:38.723 "admin_cmd_passthru": { 00:06:38.723 "identify_ctrlr": false 00:06:38.723 }, 00:06:38.723 "dhchap_digests": [ 00:06:38.723 "sha256", 00:06:38.723 "sha384", 00:06:38.723 "sha512" 00:06:38.723 ], 00:06:38.723 "dhchap_dhgroups": [ 00:06:38.723 "null", 00:06:38.723 "ffdhe2048", 00:06:38.723 "ffdhe3072", 00:06:38.723 "ffdhe4096", 00:06:38.723 "ffdhe6144", 00:06:38.723 "ffdhe8192" 00:06:38.723 ] 00:06:38.723 } 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "method": "nvmf_set_max_subsystems", 00:06:38.723 "params": { 00:06:38.723 "max_subsystems": 1024 00:06:38.723 } 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "method": "nvmf_set_crdt", 00:06:38.723 "params": { 00:06:38.723 "crdt1": 0, 00:06:38.723 "crdt2": 0, 00:06:38.723 "crdt3": 0 00:06:38.723 } 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "method": "nvmf_create_transport", 00:06:38.723 "params": { 00:06:38.723 "trtype": "TCP", 00:06:38.723 "max_queue_depth": 128, 00:06:38.723 "max_io_qpairs_per_ctrlr": 127, 00:06:38.723 "in_capsule_data_size": 4096, 00:06:38.723 "max_io_size": 131072, 00:06:38.723 "io_unit_size": 131072, 00:06:38.723 "max_aq_depth": 128, 00:06:38.723 "num_shared_buffers": 511, 00:06:38.723 "buf_cache_size": 4294967295, 00:06:38.723 "dif_insert_or_strip": false, 00:06:38.723 "zcopy": false, 00:06:38.723 "c2h_success": true, 00:06:38.723 "sock_priority": 0, 00:06:38.723 "abort_timeout_sec": 1, 00:06:38.723 "ack_timeout": 0, 00:06:38.723 "data_wr_pool_size": 0 00:06:38.723 } 00:06:38.723 } 00:06:38.723 ] 00:06:38.723 }, 00:06:38.723 { 00:06:38.723 "subsystem": "iscsi", 00:06:38.723 "config": [ 00:06:38.723 { 00:06:38.723 "method": "iscsi_set_options", 00:06:38.723 "params": { 00:06:38.723 "node_base": "iqn.2016-06.io.spdk", 00:06:38.723 "max_sessions": 128, 00:06:38.723 
"max_connections_per_session": 2, 00:06:38.723 "max_queue_depth": 64, 00:06:38.723 "default_time2wait": 2, 00:06:38.723 "default_time2retain": 20, 00:06:38.723 "first_burst_length": 8192, 00:06:38.723 "immediate_data": true, 00:06:38.723 "allow_duplicated_isid": false, 00:06:38.723 "error_recovery_level": 0, 00:06:38.723 "nop_timeout": 60, 00:06:38.723 "nop_in_interval": 30, 00:06:38.723 "disable_chap": false, 00:06:38.723 "require_chap": false, 00:06:38.723 "mutual_chap": false, 00:06:38.723 "chap_group": 0, 00:06:38.723 "max_large_datain_per_connection": 64, 00:06:38.723 "max_r2t_per_connection": 4, 00:06:38.723 "pdu_pool_size": 36864, 00:06:38.723 "immediate_data_pool_size": 16384, 00:06:38.723 "data_out_pool_size": 2048 00:06:38.723 } 00:06:38.723 } 00:06:38.723 ] 00:06:38.723 } 00:06:38.723 ] 00:06:38.723 } 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2420269 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2420269 ']' 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2420269 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2420269 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2420269' 00:06:38.723 killing process with pid 2420269 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2420269 00:06:38.723 17:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2420269 00:06:38.984 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2420613 00:06:38.984 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:38.984 17:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:44.272 17:33:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2420613 00:06:44.272 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2420613 ']' 00:06:44.272 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2420613 00:06:44.272 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:44.273 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.273 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2420613 00:06:44.273 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.273 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.273 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 2420613' 00:06:44.273 killing process with pid 2420613 00:06:44.273 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2420613 00:06:44.273 17:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2420613 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:44.273 00:06:44.273 real 0m6.563s 00:06:44.273 user 0m6.464s 00:06:44.273 sys 0m0.568s 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.273 ************************************ 00:06:44.273 END TEST skip_rpc_with_json 00:06:44.273 ************************************ 00:06:44.273 17:33:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:44.273 17:33:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.273 17:33:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.273 17:33:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.273 ************************************ 00:06:44.273 START TEST skip_rpc_with_delay 00:06:44.273 ************************************ 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:44.273 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:44.534 [2024-11-20 
17:33:44.211462] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:44.534 [2024-11-20 17:33:44.211542] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:44.534 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:44.534 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.534 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.534 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.534 00:06:44.534 real 0m0.076s 00:06:44.534 user 0m0.043s 00:06:44.534 sys 0m0.033s 00:06:44.534 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.534 17:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:44.534 ************************************ 00:06:44.534 END TEST skip_rpc_with_delay 00:06:44.534 ************************************ 00:06:44.534 17:33:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:44.534 17:33:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:44.534 17:33:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:44.534 17:33:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.534 17:33:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.534 17:33:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.534 ************************************ 00:06:44.534 START TEST exit_on_failed_rpc_init 00:06:44.534 ************************************ 00:06:44.534 17:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2421679 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2421679 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2421679 ']' 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.535 17:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:44.535 [2024-11-20 17:33:44.369389] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
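exit_on_failed_rpc_init, starting above, provokes exactly the failure logged below: a second target is pointed at the default RPC socket the first one already owns, and the test asserts the second exits non-zero. In outline (again a sketch, with $SPDK_DIR illustrative):

  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 & first=$!
  sleep 5                                   # stand-in for waitforlisten on /var/tmp/spdk.sock
  # the second instance cannot bind the same /var/tmp/spdk.sock and must fail
  if "$SPDK_DIR/build/bin/spdk_tgt" -m 0x2; then
      echo "unexpected: second target initialized" >&2
      exit 1
  fi
  kill "$first"

For contrast (not something this test does), two targets can normally coexist by giving the second its own RPC socket, e.g. spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock driven by rpc.py -s /var/tmp/spdk2.sock; here the collision on the default path is the point.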
00:06:44.535 [2024-11-20 17:33:44.369442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421679 ] 00:06:44.535 [2024-11-20 17:33:44.444546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.795 [2024-11-20 17:33:44.474067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:45.367 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:45.367 [2024-11-20 17:33:45.227814] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:45.367 [2024-11-20 17:33:45.227870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421949 ] 00:06:45.627 [2024-11-20 17:33:45.302065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.627 [2024-11-20 17:33:45.332750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.627 [2024-11-20 17:33:45.332811] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
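[annotation] This "socket in use" line is the payoff of exit_on_failed_rpc_init: a first spdk_tgt already owns /var/tmp/spdk.sock, so the second instance fails rpc_initialize and, as the rpc.c/app.c lines just below show, stops with a non-zero status. A rough reproduction sketch, with the binary path assumed and a crude sleep in place of the test's waitforlisten polling:

    # Sketch: provoke the RPC-socket collision that this test checks for.
    TGT=./build/bin/spdk_tgt          # assumed path
    "$TGT" -m 0x1 &                   # first instance claims /var/tmp/spdk.sock
    first=$!
    sleep 1                           # crude wait; the real test polls over rpc.py
    if "$TGT" -m 0x2; then            # same default socket: must fail to listen
        echo "FAIL: second target started despite the socket collision" >&2
    fi
    kill -SIGINT "$first"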
00:06:45.627 [2024-11-20 17:33:45.332821] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:45.627 [2024-11-20 17:33:45.332828] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2421679 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2421679 ']' 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2421679 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2421679 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2421679' 00:06:45.627 killing process with pid 2421679 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2421679 00:06:45.627 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2421679 00:06:45.887 00:06:45.887 real 0m1.321s 00:06:45.887 user 0m1.533s 00:06:45.887 sys 0m0.390s 00:06:45.887 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.887 17:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:45.887 ************************************ 00:06:45.887 END TEST exit_on_failed_rpc_init 00:06:45.888 ************************************ 00:06:45.888 17:33:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:45.888 00:06:45.888 real 0m13.752s 00:06:45.888 user 0m13.280s 00:06:45.888 sys 0m1.616s 00:06:45.888 17:33:45 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.888 17:33:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.888 ************************************ 00:06:45.888 END TEST skip_rpc 00:06:45.888 ************************************ 00:06:45.888 17:33:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:45.888 17:33:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.888 17:33:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.888 17:33:45 -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.888 ************************************ 00:06:45.888 START TEST rpc_client 00:06:45.888 ************************************ 00:06:45.888 17:33:45 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:46.148 * Looking for test storage... 00:06:46.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:46.148 17:33:45 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.148 17:33:45 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.148 17:33:45 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.148 17:33:45 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:46.148 17:33:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.149 17:33:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:46.149 17:33:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.149 17:33:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.149 17:33:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.149 17:33:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:46.149 17:33:45 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.149 17:33:45 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.149 --rc genhtml_branch_coverage=1 00:06:46.149 --rc genhtml_function_coverage=1 00:06:46.149 --rc genhtml_legend=1 00:06:46.149 --rc geninfo_all_blocks=1 00:06:46.149 --rc geninfo_unexecuted_blocks=1 00:06:46.149 00:06:46.149 ' 00:06:46.149 17:33:45 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.149 --rc genhtml_branch_coverage=1 00:06:46.149 --rc genhtml_function_coverage=1 00:06:46.149 --rc genhtml_legend=1 00:06:46.149 --rc geninfo_all_blocks=1 00:06:46.149 --rc geninfo_unexecuted_blocks=1 00:06:46.149 00:06:46.149 ' 00:06:46.149 17:33:45 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.149 --rc genhtml_branch_coverage=1 00:06:46.149 --rc genhtml_function_coverage=1 00:06:46.149 --rc genhtml_legend=1 00:06:46.149 --rc geninfo_all_blocks=1 00:06:46.149 --rc geninfo_unexecuted_blocks=1 00:06:46.149 00:06:46.149 ' 00:06:46.149 17:33:45 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.149 --rc genhtml_branch_coverage=1 00:06:46.149 --rc genhtml_function_coverage=1 00:06:46.149 --rc genhtml_legend=1 00:06:46.149 --rc geninfo_all_blocks=1 00:06:46.149 --rc geninfo_unexecuted_blocks=1 00:06:46.149 00:06:46.149 ' 00:06:46.149 17:33:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:46.149 OK 00:06:46.149 17:33:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:46.149 00:06:46.149 real 0m0.226s 00:06:46.149 user 0m0.142s 00:06:46.149 sys 0m0.098s 00:06:46.149 17:33:45 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.149 17:33:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:46.149 ************************************ 00:06:46.149 END TEST rpc_client 00:06:46.149 ************************************ 00:06:46.149 17:33:46 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
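[annotation] The scripts/common.sh trace above (lt 1.15 2, the IFS=.-: reads, the per-field (( ver1[v] < ver2[v] )) checks) is a plain field-by-field version compare used to pick lcov options. A standalone re-implementation of just the "less than" case, assuming purely numeric fields:

    # Sketch of the cmp_versions logic traced above, reduced to '<' on numeric fields.
    version_lt() {
        local IFS=.-:
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"                # matches the check in the log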
00:06:46.149 17:33:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.149 17:33:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.149 17:33:46 -- common/autotest_common.sh@10 -- # set +x 00:06:46.411 ************************************ 00:06:46.411 START TEST json_config 00:06:46.411 ************************************ 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.411 17:33:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.411 17:33:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.411 17:33:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.411 17:33:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.411 17:33:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.411 17:33:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.411 17:33:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.411 17:33:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.411 17:33:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.411 17:33:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.411 17:33:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.411 17:33:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:46.411 17:33:46 json_config -- scripts/common.sh@345 -- # : 1 00:06:46.411 17:33:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.411 17:33:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.411 17:33:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:46.411 17:33:46 json_config -- scripts/common.sh@353 -- # local d=1 00:06:46.411 17:33:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.411 17:33:46 json_config -- scripts/common.sh@355 -- # echo 1 00:06:46.411 17:33:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.411 17:33:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:46.411 17:33:46 json_config -- scripts/common.sh@353 -- # local d=2 00:06:46.411 17:33:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.411 17:33:46 json_config -- scripts/common.sh@355 -- # echo 2 00:06:46.411 17:33:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.411 17:33:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.411 17:33:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.411 17:33:46 json_config -- scripts/common.sh@368 -- # return 0 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.411 --rc genhtml_branch_coverage=1 00:06:46.411 --rc genhtml_function_coverage=1 00:06:46.411 --rc genhtml_legend=1 00:06:46.411 --rc geninfo_all_blocks=1 00:06:46.411 --rc geninfo_unexecuted_blocks=1 00:06:46.411 00:06:46.411 ' 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.411 --rc genhtml_branch_coverage=1 00:06:46.411 --rc genhtml_function_coverage=1 00:06:46.411 --rc genhtml_legend=1 00:06:46.411 --rc geninfo_all_blocks=1 00:06:46.411 --rc geninfo_unexecuted_blocks=1 00:06:46.411 00:06:46.411 ' 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.411 --rc genhtml_branch_coverage=1 00:06:46.411 --rc genhtml_function_coverage=1 00:06:46.411 --rc genhtml_legend=1 00:06:46.411 --rc geninfo_all_blocks=1 00:06:46.411 --rc geninfo_unexecuted_blocks=1 00:06:46.411 00:06:46.411 ' 00:06:46.411 17:33:46 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.411 --rc genhtml_branch_coverage=1 00:06:46.411 --rc genhtml_function_coverage=1 00:06:46.411 --rc genhtml_legend=1 00:06:46.411 --rc geninfo_all_blocks=1 00:06:46.411 --rc geninfo_unexecuted_blocks=1 00:06:46.411 00:06:46.411 ' 00:06:46.411 17:33:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:46.411 17:33:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.411 17:33:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.411 17:33:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.411 17:33:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.411 17:33:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.411 17:33:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.411 17:33:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.411 17:33:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.411 17:33:46 json_config -- paths/export.sh@5 -- # export PATH 00:06:46.411 17:33:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@51 -- # : 0 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
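[annotation] The "[: : integer expression expected" complaint printed while nvmf/common.sh line 33 runs just below is the classic [ "$var" -eq 1 ] pitfall: -eq needs integers on both sides, and the variable is empty here. A small sketch of the failure and the usual guard, using a hypothetical variable name:

    # Sketch: '-eq' on an empty string errors out; default the value to guard it.
    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ] 2>/dev/null && echo "set" || echo "errored or unset"
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo "set" || echo "unset (no error this time)"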
00:06:46.411 17:33:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.411 17:33:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:46.412 INFO: JSON configuration test init 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.412 17:33:46 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:46.412 17:33:46 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:46.412 17:33:46 json_config -- json_config/common.sh@10 -- # shift 00:06:46.412 17:33:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:46.412 17:33:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:46.412 17:33:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:46.412 17:33:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.412 17:33:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.412 17:33:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2422157 00:06:46.412 17:33:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:46.412 Waiting for target to run... 00:06:46.412 17:33:46 json_config -- json_config/common.sh@25 -- # waitforlisten 2422157 /var/tmp/spdk_tgt.sock 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@831 -- # '[' -z 2422157 ']' 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:46.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:46.412 17:33:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.412 17:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.672 [2024-11-20 17:33:46.349822] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:46.672 [2024-11-20 17:33:46.349879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422157 ] 00:06:46.672 [2024-11-20 17:33:46.559591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.672 [2024-11-20 17:33:46.574547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.242 17:33:47 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.243 17:33:47 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:47.243 17:33:47 json_config -- json_config/common.sh@26 -- # echo '' 00:06:47.243 00:06:47.243 17:33:47 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:47.243 17:33:47 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:47.243 17:33:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.243 17:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.243 17:33:47 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:47.243 17:33:47 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:47.243 17:33:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.243 17:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.503 17:33:47 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:47.503 17:33:47 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:47.503 17:33:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:48.073 17:33:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.073 17:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:48.073 17:33:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:48.073 17:33:47 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@54 -- # sort 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:48.073 17:33:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.073 17:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:48.073 17:33:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.073 17:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:48.073 17:33:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:48.073 17:33:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:48.334 MallocForNvmf0 00:06:48.334 17:33:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:48.334 17:33:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:48.595 MallocForNvmf1 00:06:48.595 17:33:48 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:48.595 17:33:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:48.595 [2024-11-20 17:33:48.461247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.595 17:33:48 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:48.595 17:33:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:48.857 17:33:48 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:48.857 17:33:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:49.117 17:33:48 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:49.117 17:33:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:49.378 17:33:49 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:49.378 17:33:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:49.378 [2024-11-20 17:33:49.179431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:49.378 17:33:49 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:49.378 17:33:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:49.378 17:33:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.378 17:33:49 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:49.378 17:33:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:49.378 17:33:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.378 17:33:49 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:49.378 17:33:49 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:49.378 17:33:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:49.639 MallocBdevForConfigChangeCheck 00:06:49.639 17:33:49 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:49.639 17:33:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:49.639 17:33:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.639 17:33:49 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:49.639 17:33:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:49.899 17:33:49 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:49.899 INFO: shutting down applications... 
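[annotation] The create_nvmf_subsystem_config phase above builds the whole target over rpc.py: two malloc bdevs, a TCP transport, one subsystem holding both namespaces, and a listener on 127.0.0.1:4420. Condensed from the traced calls, with the repository path shortened to an assumed relative one:

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"          # -s selects the target's RPC socket
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420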
00:06:49.899 17:33:49 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:49.899 17:33:49 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:49.899 17:33:49 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:49.899 17:33:49 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:50.470 Calling clear_iscsi_subsystem 00:06:50.470 Calling clear_nvmf_subsystem 00:06:50.470 Calling clear_nbd_subsystem 00:06:50.470 Calling clear_ublk_subsystem 00:06:50.470 Calling clear_vhost_blk_subsystem 00:06:50.470 Calling clear_vhost_scsi_subsystem 00:06:50.470 Calling clear_bdev_subsystem 00:06:50.470 17:33:50 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:50.470 17:33:50 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:50.470 17:33:50 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:50.470 17:33:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:50.470 17:33:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:50.470 17:33:50 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:50.729 17:33:50 json_config -- json_config/json_config.sh@352 -- # break 00:06:50.729 17:33:50 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:50.729 17:33:50 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:50.729 17:33:50 json_config -- json_config/common.sh@31 -- # local app=target 00:06:50.729 17:33:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:50.729 17:33:50 json_config -- json_config/common.sh@35 -- # [[ -n 2422157 ]] 00:06:50.729 17:33:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2422157 00:06:50.729 17:33:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:50.729 17:33:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.729 17:33:50 json_config -- json_config/common.sh@41 -- # kill -0 2422157 00:06:50.729 17:33:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.299 17:33:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.299 17:33:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.299 17:33:51 json_config -- json_config/common.sh@41 -- # kill -0 2422157 00:06:51.299 17:33:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:51.299 17:33:51 json_config -- json_config/common.sh@43 -- # break 00:06:51.299 17:33:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:51.299 17:33:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:51.299 SPDK target shutdown done 00:06:51.299 17:33:51 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:51.299 INFO: relaunching applications... 
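[annotation] "SPDK target shutdown done" above comes from the shutdown helper traced in json_config/common.sh: send SIGINT, then poll the pid with kill -0 for up to 30 half-second intervals before declaring success or giving up. A standalone sketch of that loop:

    # Sketch of the graceful-shutdown wait traced above (SIGINT, then 15s of polling).
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" || return 1
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo "SPDK target shutdown done"
                return 0
            fi
            sleep 0.5
        done
        echo "target still alive after timeout" >&2
        return 1
    }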
00:06:51.299 17:33:51 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.299 17:33:51 json_config -- json_config/common.sh@9 -- # local app=target 00:06:51.299 17:33:51 json_config -- json_config/common.sh@10 -- # shift 00:06:51.299 17:33:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:51.299 17:33:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:51.299 17:33:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:51.299 17:33:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:51.299 17:33:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:51.299 17:33:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:51.299 17:33:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2423294 00:06:51.299 17:33:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:51.299 Waiting for target to run... 00:06:51.299 17:33:51 json_config -- json_config/common.sh@25 -- # waitforlisten 2423294 /var/tmp/spdk_tgt.sock 00:06:51.300 17:33:51 json_config -- common/autotest_common.sh@831 -- # '[' -z 2423294 ']' 00:06:51.300 17:33:51 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:51.300 17:33:51 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.300 17:33:51 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:51.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:51.300 17:33:51 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.300 17:33:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:51.300 [2024-11-20 17:33:51.168226] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:51.300 [2024-11-20 17:33:51.168269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2423294 ] 00:06:51.871 [2024-11-20 17:33:51.482253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.871 [2024-11-20 17:33:51.511423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.131 [2024-11-20 17:33:51.988609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.131 [2024-11-20 17:33:52.020969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:52.391 17:33:52 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.391 17:33:52 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:52.391 17:33:52 json_config -- json_config/common.sh@26 -- # echo '' 00:06:52.391 00:06:52.391 17:33:52 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:52.391 17:33:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:52.391 INFO: Checking if target configuration is the same... 
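[annotation] The "same configuration" check announced here, and traced below, is a normalize-then-diff: dump the live config with save_config, push both JSON documents through config_filter.py -method sort, and diff -u the sorted forms; empty diff output means the relaunched target reproduced its configuration exactly. A hedged outline of that flow (config_filter.py is assumed to read stdin, as the json_diff.sh trace suggests):

    # Outline of the config comparison traced below (paths shortened).
    FILTER=test/json_config/config_filter.py
    live=$(mktemp) && scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > "$live"
    if diff -u <(python3 "$FILTER" -method sort < "$live") \
               <(python3 "$FILTER" -method sort < spdk_tgt_config.json); then
        echo "INFO: JSON config files are the same"
    fi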
00:06:52.391 17:33:52 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:52.391 17:33:52 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.391 17:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:52.391 + '[' 2 -ne 2 ']' 00:06:52.391 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:52.391 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:52.391 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:52.391 +++ basename /dev/fd/62 00:06:52.391 ++ mktemp /tmp/62.XXX 00:06:52.391 + tmp_file_1=/tmp/62.JWH 00:06:52.391 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.391 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:52.391 + tmp_file_2=/tmp/spdk_tgt_config.json.SpU 00:06:52.391 + ret=0 00:06:52.391 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:52.651 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:52.651 + diff -u /tmp/62.JWH /tmp/spdk_tgt_config.json.SpU 00:06:52.651 + echo 'INFO: JSON config files are the same' 00:06:52.651 INFO: JSON config files are the same 00:06:52.651 + rm /tmp/62.JWH /tmp/spdk_tgt_config.json.SpU 00:06:52.651 + exit 0 00:06:52.651 17:33:52 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:52.651 17:33:52 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:52.651 INFO: changing configuration and checking if this can be detected... 00:06:52.651 17:33:52 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:52.651 17:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:52.912 17:33:52 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:52.912 17:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:52.912 17:33:52 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.912 + '[' 2 -ne 2 ']' 00:06:52.912 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:52.912 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:52.912 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:52.912 +++ basename /dev/fd/62 00:06:52.912 ++ mktemp /tmp/62.XXX 00:06:52.912 + tmp_file_1=/tmp/62.ulX 00:06:52.912 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:52.912 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:52.912 + tmp_file_2=/tmp/spdk_tgt_config.json.hZz 00:06:52.912 + ret=0 00:06:52.912 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:53.173 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:53.173 + diff -u /tmp/62.ulX /tmp/spdk_tgt_config.json.hZz 00:06:53.173 + ret=1 00:06:53.173 + echo '=== Start of file: /tmp/62.ulX ===' 00:06:53.173 + cat /tmp/62.ulX 00:06:53.173 + echo '=== End of file: /tmp/62.ulX ===' 00:06:53.173 + echo '' 00:06:53.173 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hZz ===' 00:06:53.173 + cat /tmp/spdk_tgt_config.json.hZz 00:06:53.173 + echo '=== End of file: /tmp/spdk_tgt_config.json.hZz ===' 00:06:53.173 + echo '' 00:06:53.173 + rm /tmp/62.ulX /tmp/spdk_tgt_config.json.hZz 00:06:53.173 + exit 1 00:06:53.173 17:33:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:53.173 INFO: configuration change detected. 00:06:53.173 17:33:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:53.173 17:33:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:53.173 17:33:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.173 17:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@324 -- # [[ -n 2423294 ]] 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:53.173 17:33:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.173 17:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:53.173 17:33:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.173 17:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.173 17:33:53 json_config -- json_config/json_config.sh@330 -- # killprocess 2423294 00:06:53.173 17:33:53 json_config -- common/autotest_common.sh@950 -- # '[' -z 2423294 ']' 00:06:53.173 17:33:53 json_config -- common/autotest_common.sh@954 -- # kill -0 2423294 00:06:53.173 17:33:53 json_config -- common/autotest_common.sh@955 -- # uname 00:06:53.173 17:33:53 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.173 17:33:53 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2423294 00:06:53.433 17:33:53 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.434 17:33:53 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.434 17:33:53 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2423294' 00:06:53.434 killing process with pid 2423294 00:06:53.434 17:33:53 json_config -- common/autotest_common.sh@969 -- # kill 2423294 00:06:53.434 17:33:53 json_config -- common/autotest_common.sh@974 -- # wait 2423294 00:06:53.695 17:33:53 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.695 17:33:53 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:53.695 17:33:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.695 17:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.695 17:33:53 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:53.695 17:33:53 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:53.695 INFO: Success 00:06:53.695 00:06:53.695 real 0m7.369s 00:06:53.695 user 0m9.304s 00:06:53.695 sys 0m1.650s 00:06:53.695 17:33:53 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.695 17:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.695 ************************************ 00:06:53.695 END TEST json_config 00:06:53.695 ************************************ 00:06:53.695 17:33:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:53.695 17:33:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.695 17:33:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.695 17:33:53 -- common/autotest_common.sh@10 -- # set +x 00:06:53.695 ************************************ 00:06:53.695 START TEST json_config_extra_key 00:06:53.695 ************************************ 00:06:53.695 17:33:53 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:53.695 17:33:53 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.695 17:33:53 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.695 17:33:53 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.957 17:33:53 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.957 17:33:53 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:53.957 17:33:53 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.957 17:33:53 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.957 --rc genhtml_branch_coverage=1 00:06:53.957 --rc genhtml_function_coverage=1 00:06:53.957 --rc genhtml_legend=1 00:06:53.957 --rc geninfo_all_blocks=1 00:06:53.957 --rc geninfo_unexecuted_blocks=1 00:06:53.957 00:06:53.957 ' 00:06:53.957 17:33:53 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.957 --rc genhtml_branch_coverage=1 00:06:53.957 --rc genhtml_function_coverage=1 00:06:53.957 --rc genhtml_legend=1 00:06:53.957 --rc geninfo_all_blocks=1 00:06:53.957 --rc geninfo_unexecuted_blocks=1 00:06:53.957 00:06:53.957 ' 00:06:53.957 17:33:53 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.957 --rc genhtml_branch_coverage=1 00:06:53.957 --rc genhtml_function_coverage=1 00:06:53.957 --rc genhtml_legend=1 00:06:53.957 --rc geninfo_all_blocks=1 00:06:53.957 --rc geninfo_unexecuted_blocks=1 00:06:53.957 00:06:53.957 ' 00:06:53.957 17:33:53 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.957 --rc genhtml_branch_coverage=1 00:06:53.957 --rc genhtml_function_coverage=1 00:06:53.957 --rc genhtml_legend=1 00:06:53.957 --rc geninfo_all_blocks=1 00:06:53.957 --rc geninfo_unexecuted_blocks=1 00:06:53.957 00:06:53.957 ' 00:06:53.957 17:33:53 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.957 17:33:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.957 17:33:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.958 17:33:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.958 17:33:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.958 17:33:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.958 17:33:53 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.958 17:33:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:53.958 17:33:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.958 17:33:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:53.958 INFO: launching applications... 
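The "[: : integer expression expected" message in the trace above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': a numeric test applied to an empty string. The harness keeps going because the failed test simply takes the false branch. A minimal sketch of the failure mode and a defensive rewrite, using an illustrative variable name rather than the one common.sh actually tests:

    flag=""                           # empty, as in the trace
    [ "$flag" -eq 1 ]                 # -> [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then   # defaulting empty to 0 sidesteps the error
        echo "flag enabled"
    fi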
00:06:53.958 17:33:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2424075 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.958 Waiting for target to run... 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2424075 /var/tmp/spdk_tgt.sock 00:06:53.958 17:33:53 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2424075 ']' 00:06:53.958 17:33:53 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.958 17:33:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:53.958 17:33:53 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.958 17:33:53 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:53.958 17:33:53 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.958 17:33:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.958 [2024-11-20 17:33:53.782312] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:53.958 [2024-11-20 17:33:53.782385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424075 ] 00:06:54.218 [2024-11-20 17:33:54.057000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.219 [2024-11-20 17:33:54.075695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.789 17:33:54 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.789 17:33:54 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:54.789 00:06:54.789 17:33:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:54.789 INFO: shutting down applications... 
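The start sequence above launches spdk_tgt with "-r /var/tmp/spdk_tgt.sock", and waitforlisten then blocks until that UNIX-domain RPC socket accepts connections (max_retries=100 in the trace). A minimal sketch of that polling pattern, shown as an illustration of the idea rather than the actual waitforlisten from autotest_common.sh:

    sock=/var/tmp/spdk_tgt.sock
    for i in $(seq 1 100); do                                          # mirrors max_retries=100
        if [ -S "$sock" ] && echo | socat - "UNIX-CONNECT:$sock" >/dev/null 2>&1; then
            echo "target is listening on $sock"
            break
        fi
        sleep 0.1                                                      # brief pause between probes
    done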
00:06:54.789 17:33:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2424075 ]] 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2424075 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2424075 00:06:54.789 17:33:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:55.360 17:33:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:55.360 17:33:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.360 17:33:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2424075 00:06:55.360 17:33:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:55.360 17:33:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:55.360 17:33:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:55.360 17:33:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:55.360 SPDK target shutdown done 00:06:55.360 17:33:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:55.360 Success 00:06:55.360 00:06:55.360 real 0m1.558s 00:06:55.360 user 0m1.154s 00:06:55.360 sys 0m0.417s 00:06:55.360 17:33:55 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.360 17:33:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:55.360 ************************************ 00:06:55.360 END TEST json_config_extra_key 00:06:55.360 ************************************ 00:06:55.360 17:33:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:55.360 17:33:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.360 17:33:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.360 17:33:55 -- common/autotest_common.sh@10 -- # set +x 00:06:55.360 ************************************ 00:06:55.360 START TEST alias_rpc 00:06:55.360 ************************************ 00:06:55.360 17:33:55 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:55.360 * Looking for test storage... 
00:06:55.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:55.360 17:33:55 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.360 17:33:55 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.360 17:33:55 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.621 17:33:55 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.621 17:33:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:55.621 17:33:55 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.621 17:33:55 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.621 --rc genhtml_branch_coverage=1 00:06:55.621 --rc genhtml_function_coverage=1 00:06:55.621 --rc genhtml_legend=1 00:06:55.621 --rc geninfo_all_blocks=1 00:06:55.622 --rc geninfo_unexecuted_blocks=1 00:06:55.622 00:06:55.622 ' 00:06:55.622 17:33:55 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.622 --rc genhtml_branch_coverage=1 00:06:55.622 --rc genhtml_function_coverage=1 00:06:55.622 --rc genhtml_legend=1 00:06:55.622 --rc geninfo_all_blocks=1 00:06:55.622 --rc geninfo_unexecuted_blocks=1 00:06:55.622 00:06:55.622 ' 00:06:55.622 17:33:55 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.622 --rc genhtml_branch_coverage=1 00:06:55.622 --rc genhtml_function_coverage=1 00:06:55.622 --rc genhtml_legend=1 00:06:55.622 --rc geninfo_all_blocks=1 00:06:55.622 --rc geninfo_unexecuted_blocks=1 00:06:55.622 00:06:55.622 ' 00:06:55.622 17:33:55 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.622 --rc genhtml_branch_coverage=1 00:06:55.622 --rc genhtml_function_coverage=1 00:06:55.622 --rc genhtml_legend=1 00:06:55.622 --rc geninfo_all_blocks=1 00:06:55.622 --rc geninfo_unexecuted_blocks=1 00:06:55.622 00:06:55.622 ' 00:06:55.622 17:33:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.622 17:33:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2424468 00:06:55.622 17:33:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2424468 00:06:55.622 17:33:55 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2424468 ']' 00:06:55.622 17:33:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.622 17:33:55 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.622 17:33:55 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.622 17:33:55 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.622 17:33:55 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.622 17:33:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.622 [2024-11-20 17:33:55.410708] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
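Before the daemon above finishes booting, alias_rpc.sh has already installed its ERR trap ("trap 'killprocess $spdk_tgt_pid; exit 1' ERR"), so a failing RPC later in the test still tears the target down. The start/trap/teardown shape, sketched with a plain kill standing in for the killprocess helper the trace calls, and with repo-relative paths shortened:

    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'kill -9 $spdk_tgt_pid; exit 1' ERR   # stand-in for killprocess on any failure
    # ... issue scripts/rpc.py calls against /var/tmp/spdk.sock ...
    trap - ERR
    kill -SIGINT $spdk_tgt_pid                 # normal teardown path
    wait $spdk_tgt_pid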
00:06:55.622 [2024-11-20 17:33:55.410779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424468 ] 00:06:55.622 [2024-11-20 17:33:55.491772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.622 [2024-11-20 17:33:55.532644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.563 17:33:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:56.563 17:33:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2424468 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2424468 ']' 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2424468 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2424468 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2424468' 00:06:56.563 killing process with pid 2424468 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@969 -- # kill 2424468 00:06:56.563 17:33:56 alias_rpc -- common/autotest_common.sh@974 -- # wait 2424468 00:06:56.824 00:06:56.824 real 0m1.504s 00:06:56.824 user 0m1.622s 00:06:56.824 sys 0m0.442s 00:06:56.824 17:33:56 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.824 17:33:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 ************************************ 00:06:56.824 END TEST alias_rpc 00:06:56.824 ************************************ 00:06:56.824 17:33:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:56.824 17:33:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:56.824 17:33:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.824 17:33:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.824 17:33:56 -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 ************************************ 00:06:56.824 START TEST spdkcli_tcp 00:06:56.824 ************************************ 00:06:56.824 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:57.085 * Looking for test storage... 
00:06:57.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:57.085 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.085 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.085 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:57.085 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:57.085 17:33:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.085 17:33:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.085 17:33:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.085 17:33:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.085 17:33:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.085 17:33:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.085 17:33:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.085 17:33:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.086 17:33:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:57.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.086 --rc genhtml_branch_coverage=1 00:06:57.086 --rc genhtml_function_coverage=1 00:06:57.086 --rc genhtml_legend=1 00:06:57.086 --rc geninfo_all_blocks=1 00:06:57.086 --rc geninfo_unexecuted_blocks=1 00:06:57.086 00:06:57.086 ' 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:57.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.086 --rc genhtml_branch_coverage=1 00:06:57.086 --rc genhtml_function_coverage=1 00:06:57.086 --rc genhtml_legend=1 00:06:57.086 --rc geninfo_all_blocks=1 00:06:57.086 --rc 
geninfo_unexecuted_blocks=1 00:06:57.086 00:06:57.086 ' 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:57.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.086 --rc genhtml_branch_coverage=1 00:06:57.086 --rc genhtml_function_coverage=1 00:06:57.086 --rc genhtml_legend=1 00:06:57.086 --rc geninfo_all_blocks=1 00:06:57.086 --rc geninfo_unexecuted_blocks=1 00:06:57.086 00:06:57.086 ' 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:57.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.086 --rc genhtml_branch_coverage=1 00:06:57.086 --rc genhtml_function_coverage=1 00:06:57.086 --rc genhtml_legend=1 00:06:57.086 --rc geninfo_all_blocks=1 00:06:57.086 --rc geninfo_unexecuted_blocks=1 00:06:57.086 00:06:57.086 ' 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2424816 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2424816 00:06:57.086 17:33:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2424816 ']' 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.086 17:33:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.086 [2024-11-20 17:33:56.998692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
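With IP_ADDRESS=127.0.0.1 and PORT=9998 fixed above, the next trace lines bridge that TCP port to the target's UNIX socket with socat, so rpc.py can exercise the TCP transport path even though spdk_tgt only listens on /var/tmp/spdk.sock. The equivalent manual steps, using the commands and flags that appear verbatim in this log (only the cleanup kill is added for illustration):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # one-shot TCP-to-UNIX proxy
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # -r conn retries, -t timeout
    kill $socat_pid 2>/dev/null || true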
00:06:57.086 [2024-11-20 17:33:56.998768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424816 ] 00:06:57.346 [2024-11-20 17:33:57.078536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.346 [2024-11-20 17:33:57.119346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.347 [2024-11-20 17:33:57.119514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.916 17:33:57 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.916 17:33:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:57.916 17:33:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2424885 00:06:57.917 17:33:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:57.917 17:33:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:58.177 [ 00:06:58.177 "bdev_malloc_delete", 00:06:58.177 "bdev_malloc_create", 00:06:58.177 "bdev_null_resize", 00:06:58.177 "bdev_null_delete", 00:06:58.177 "bdev_null_create", 00:06:58.177 "bdev_nvme_cuse_unregister", 00:06:58.177 "bdev_nvme_cuse_register", 00:06:58.177 "bdev_opal_new_user", 00:06:58.177 "bdev_opal_set_lock_state", 00:06:58.177 "bdev_opal_delete", 00:06:58.177 "bdev_opal_get_info", 00:06:58.177 "bdev_opal_create", 00:06:58.177 "bdev_nvme_opal_revert", 00:06:58.177 "bdev_nvme_opal_init", 00:06:58.177 "bdev_nvme_send_cmd", 00:06:58.177 "bdev_nvme_set_keys", 00:06:58.177 "bdev_nvme_get_path_iostat", 00:06:58.177 "bdev_nvme_get_mdns_discovery_info", 00:06:58.178 "bdev_nvme_stop_mdns_discovery", 00:06:58.178 "bdev_nvme_start_mdns_discovery", 00:06:58.178 "bdev_nvme_set_multipath_policy", 00:06:58.178 "bdev_nvme_set_preferred_path", 00:06:58.178 "bdev_nvme_get_io_paths", 00:06:58.178 "bdev_nvme_remove_error_injection", 00:06:58.178 "bdev_nvme_add_error_injection", 00:06:58.178 "bdev_nvme_get_discovery_info", 00:06:58.178 "bdev_nvme_stop_discovery", 00:06:58.178 "bdev_nvme_start_discovery", 00:06:58.178 "bdev_nvme_get_controller_health_info", 00:06:58.178 "bdev_nvme_disable_controller", 00:06:58.178 "bdev_nvme_enable_controller", 00:06:58.178 "bdev_nvme_reset_controller", 00:06:58.178 "bdev_nvme_get_transport_statistics", 00:06:58.178 "bdev_nvme_apply_firmware", 00:06:58.178 "bdev_nvme_detach_controller", 00:06:58.178 "bdev_nvme_get_controllers", 00:06:58.178 "bdev_nvme_attach_controller", 00:06:58.178 "bdev_nvme_set_hotplug", 00:06:58.178 "bdev_nvme_set_options", 00:06:58.178 "bdev_passthru_delete", 00:06:58.178 "bdev_passthru_create", 00:06:58.178 "bdev_lvol_set_parent_bdev", 00:06:58.178 "bdev_lvol_set_parent", 00:06:58.178 "bdev_lvol_check_shallow_copy", 00:06:58.178 "bdev_lvol_start_shallow_copy", 00:06:58.178 "bdev_lvol_grow_lvstore", 00:06:58.178 "bdev_lvol_get_lvols", 00:06:58.178 "bdev_lvol_get_lvstores", 00:06:58.178 "bdev_lvol_delete", 00:06:58.178 "bdev_lvol_set_read_only", 00:06:58.178 "bdev_lvol_resize", 00:06:58.178 "bdev_lvol_decouple_parent", 00:06:58.178 "bdev_lvol_inflate", 00:06:58.178 "bdev_lvol_rename", 00:06:58.178 "bdev_lvol_clone_bdev", 00:06:58.178 "bdev_lvol_clone", 00:06:58.178 "bdev_lvol_snapshot", 00:06:58.178 "bdev_lvol_create", 00:06:58.178 "bdev_lvol_delete_lvstore", 00:06:58.178 "bdev_lvol_rename_lvstore", 
00:06:58.178 "bdev_lvol_create_lvstore", 00:06:58.178 "bdev_raid_set_options", 00:06:58.178 "bdev_raid_remove_base_bdev", 00:06:58.178 "bdev_raid_add_base_bdev", 00:06:58.178 "bdev_raid_delete", 00:06:58.178 "bdev_raid_create", 00:06:58.178 "bdev_raid_get_bdevs", 00:06:58.178 "bdev_error_inject_error", 00:06:58.178 "bdev_error_delete", 00:06:58.178 "bdev_error_create", 00:06:58.178 "bdev_split_delete", 00:06:58.178 "bdev_split_create", 00:06:58.178 "bdev_delay_delete", 00:06:58.178 "bdev_delay_create", 00:06:58.178 "bdev_delay_update_latency", 00:06:58.178 "bdev_zone_block_delete", 00:06:58.178 "bdev_zone_block_create", 00:06:58.178 "blobfs_create", 00:06:58.178 "blobfs_detect", 00:06:58.178 "blobfs_set_cache_size", 00:06:58.178 "bdev_aio_delete", 00:06:58.178 "bdev_aio_rescan", 00:06:58.178 "bdev_aio_create", 00:06:58.178 "bdev_ftl_set_property", 00:06:58.178 "bdev_ftl_get_properties", 00:06:58.178 "bdev_ftl_get_stats", 00:06:58.178 "bdev_ftl_unmap", 00:06:58.178 "bdev_ftl_unload", 00:06:58.178 "bdev_ftl_delete", 00:06:58.178 "bdev_ftl_load", 00:06:58.178 "bdev_ftl_create", 00:06:58.178 "bdev_virtio_attach_controller", 00:06:58.178 "bdev_virtio_scsi_get_devices", 00:06:58.178 "bdev_virtio_detach_controller", 00:06:58.178 "bdev_virtio_blk_set_hotplug", 00:06:58.178 "bdev_iscsi_delete", 00:06:58.178 "bdev_iscsi_create", 00:06:58.178 "bdev_iscsi_set_options", 00:06:58.178 "accel_error_inject_error", 00:06:58.178 "ioat_scan_accel_module", 00:06:58.178 "dsa_scan_accel_module", 00:06:58.178 "iaa_scan_accel_module", 00:06:58.178 "vfu_virtio_create_fs_endpoint", 00:06:58.178 "vfu_virtio_create_scsi_endpoint", 00:06:58.178 "vfu_virtio_scsi_remove_target", 00:06:58.178 "vfu_virtio_scsi_add_target", 00:06:58.178 "vfu_virtio_create_blk_endpoint", 00:06:58.178 "vfu_virtio_delete_endpoint", 00:06:58.178 "keyring_file_remove_key", 00:06:58.178 "keyring_file_add_key", 00:06:58.178 "keyring_linux_set_options", 00:06:58.178 "fsdev_aio_delete", 00:06:58.178 "fsdev_aio_create", 00:06:58.178 "iscsi_get_histogram", 00:06:58.178 "iscsi_enable_histogram", 00:06:58.178 "iscsi_set_options", 00:06:58.178 "iscsi_get_auth_groups", 00:06:58.178 "iscsi_auth_group_remove_secret", 00:06:58.178 "iscsi_auth_group_add_secret", 00:06:58.178 "iscsi_delete_auth_group", 00:06:58.178 "iscsi_create_auth_group", 00:06:58.178 "iscsi_set_discovery_auth", 00:06:58.178 "iscsi_get_options", 00:06:58.178 "iscsi_target_node_request_logout", 00:06:58.178 "iscsi_target_node_set_redirect", 00:06:58.178 "iscsi_target_node_set_auth", 00:06:58.178 "iscsi_target_node_add_lun", 00:06:58.178 "iscsi_get_stats", 00:06:58.178 "iscsi_get_connections", 00:06:58.178 "iscsi_portal_group_set_auth", 00:06:58.178 "iscsi_start_portal_group", 00:06:58.178 "iscsi_delete_portal_group", 00:06:58.178 "iscsi_create_portal_group", 00:06:58.178 "iscsi_get_portal_groups", 00:06:58.178 "iscsi_delete_target_node", 00:06:58.178 "iscsi_target_node_remove_pg_ig_maps", 00:06:58.178 "iscsi_target_node_add_pg_ig_maps", 00:06:58.178 "iscsi_create_target_node", 00:06:58.178 "iscsi_get_target_nodes", 00:06:58.178 "iscsi_delete_initiator_group", 00:06:58.178 "iscsi_initiator_group_remove_initiators", 00:06:58.178 "iscsi_initiator_group_add_initiators", 00:06:58.178 "iscsi_create_initiator_group", 00:06:58.178 "iscsi_get_initiator_groups", 00:06:58.178 "nvmf_set_crdt", 00:06:58.178 "nvmf_set_config", 00:06:58.178 "nvmf_set_max_subsystems", 00:06:58.178 "nvmf_stop_mdns_prr", 00:06:58.178 "nvmf_publish_mdns_prr", 00:06:58.178 "nvmf_subsystem_get_listeners", 00:06:58.178 
"nvmf_subsystem_get_qpairs", 00:06:58.178 "nvmf_subsystem_get_controllers", 00:06:58.178 "nvmf_get_stats", 00:06:58.178 "nvmf_get_transports", 00:06:58.178 "nvmf_create_transport", 00:06:58.178 "nvmf_get_targets", 00:06:58.178 "nvmf_delete_target", 00:06:58.178 "nvmf_create_target", 00:06:58.178 "nvmf_subsystem_allow_any_host", 00:06:58.178 "nvmf_subsystem_set_keys", 00:06:58.178 "nvmf_subsystem_remove_host", 00:06:58.178 "nvmf_subsystem_add_host", 00:06:58.178 "nvmf_ns_remove_host", 00:06:58.178 "nvmf_ns_add_host", 00:06:58.178 "nvmf_subsystem_remove_ns", 00:06:58.178 "nvmf_subsystem_set_ns_ana_group", 00:06:58.178 "nvmf_subsystem_add_ns", 00:06:58.178 "nvmf_subsystem_listener_set_ana_state", 00:06:58.178 "nvmf_discovery_get_referrals", 00:06:58.178 "nvmf_discovery_remove_referral", 00:06:58.178 "nvmf_discovery_add_referral", 00:06:58.178 "nvmf_subsystem_remove_listener", 00:06:58.178 "nvmf_subsystem_add_listener", 00:06:58.178 "nvmf_delete_subsystem", 00:06:58.178 "nvmf_create_subsystem", 00:06:58.178 "nvmf_get_subsystems", 00:06:58.178 "env_dpdk_get_mem_stats", 00:06:58.178 "nbd_get_disks", 00:06:58.178 "nbd_stop_disk", 00:06:58.178 "nbd_start_disk", 00:06:58.178 "ublk_recover_disk", 00:06:58.178 "ublk_get_disks", 00:06:58.178 "ublk_stop_disk", 00:06:58.178 "ublk_start_disk", 00:06:58.178 "ublk_destroy_target", 00:06:58.178 "ublk_create_target", 00:06:58.178 "virtio_blk_create_transport", 00:06:58.178 "virtio_blk_get_transports", 00:06:58.178 "vhost_controller_set_coalescing", 00:06:58.178 "vhost_get_controllers", 00:06:58.178 "vhost_delete_controller", 00:06:58.178 "vhost_create_blk_controller", 00:06:58.178 "vhost_scsi_controller_remove_target", 00:06:58.178 "vhost_scsi_controller_add_target", 00:06:58.178 "vhost_start_scsi_controller", 00:06:58.178 "vhost_create_scsi_controller", 00:06:58.178 "thread_set_cpumask", 00:06:58.178 "scheduler_set_options", 00:06:58.178 "framework_get_governor", 00:06:58.178 "framework_get_scheduler", 00:06:58.178 "framework_set_scheduler", 00:06:58.178 "framework_get_reactors", 00:06:58.178 "thread_get_io_channels", 00:06:58.178 "thread_get_pollers", 00:06:58.178 "thread_get_stats", 00:06:58.178 "framework_monitor_context_switch", 00:06:58.178 "spdk_kill_instance", 00:06:58.178 "log_enable_timestamps", 00:06:58.178 "log_get_flags", 00:06:58.179 "log_clear_flag", 00:06:58.179 "log_set_flag", 00:06:58.179 "log_get_level", 00:06:58.179 "log_set_level", 00:06:58.179 "log_get_print_level", 00:06:58.179 "log_set_print_level", 00:06:58.179 "framework_enable_cpumask_locks", 00:06:58.179 "framework_disable_cpumask_locks", 00:06:58.179 "framework_wait_init", 00:06:58.179 "framework_start_init", 00:06:58.179 "scsi_get_devices", 00:06:58.179 "bdev_get_histogram", 00:06:58.179 "bdev_enable_histogram", 00:06:58.179 "bdev_set_qos_limit", 00:06:58.179 "bdev_set_qd_sampling_period", 00:06:58.179 "bdev_get_bdevs", 00:06:58.179 "bdev_reset_iostat", 00:06:58.179 "bdev_get_iostat", 00:06:58.179 "bdev_examine", 00:06:58.179 "bdev_wait_for_examine", 00:06:58.179 "bdev_set_options", 00:06:58.179 "accel_get_stats", 00:06:58.179 "accel_set_options", 00:06:58.179 "accel_set_driver", 00:06:58.179 "accel_crypto_key_destroy", 00:06:58.179 "accel_crypto_keys_get", 00:06:58.179 "accel_crypto_key_create", 00:06:58.179 "accel_assign_opc", 00:06:58.179 "accel_get_module_info", 00:06:58.179 "accel_get_opc_assignments", 00:06:58.179 "vmd_rescan", 00:06:58.179 "vmd_remove_device", 00:06:58.179 "vmd_enable", 00:06:58.179 "sock_get_default_impl", 00:06:58.179 "sock_set_default_impl", 
00:06:58.179 "sock_impl_set_options", 00:06:58.179 "sock_impl_get_options", 00:06:58.179 "iobuf_get_stats", 00:06:58.179 "iobuf_set_options", 00:06:58.179 "keyring_get_keys", 00:06:58.179 "vfu_tgt_set_base_path", 00:06:58.179 "framework_get_pci_devices", 00:06:58.179 "framework_get_config", 00:06:58.179 "framework_get_subsystems", 00:06:58.179 "fsdev_set_opts", 00:06:58.179 "fsdev_get_opts", 00:06:58.179 "trace_get_info", 00:06:58.179 "trace_get_tpoint_group_mask", 00:06:58.179 "trace_disable_tpoint_group", 00:06:58.179 "trace_enable_tpoint_group", 00:06:58.179 "trace_clear_tpoint_mask", 00:06:58.179 "trace_set_tpoint_mask", 00:06:58.179 "notify_get_notifications", 00:06:58.179 "notify_get_types", 00:06:58.179 "spdk_get_version", 00:06:58.179 "rpc_get_methods" 00:06:58.179 ] 00:06:58.179 17:33:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:58.179 17:33:57 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.179 17:33:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.179 17:33:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:58.179 17:33:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2424816 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2424816 ']' 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2424816 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2424816 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2424816' 00:06:58.179 killing process with pid 2424816 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2424816 00:06:58.179 17:33:58 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2424816 00:06:58.440 00:06:58.440 real 0m1.542s 00:06:58.440 user 0m2.782s 00:06:58.440 sys 0m0.492s 00:06:58.440 17:33:58 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.440 17:33:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.440 ************************************ 00:06:58.440 END TEST spdkcli_tcp 00:06:58.440 ************************************ 00:06:58.440 17:33:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:58.440 17:33:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.440 17:33:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.440 17:33:58 -- common/autotest_common.sh@10 -- # set +x 00:06:58.440 ************************************ 00:06:58.440 START TEST dpdk_mem_utility 00:06:58.440 ************************************ 00:06:58.440 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:58.701 * Looking for test storage... 
00:06:58.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.701 17:33:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.701 --rc genhtml_branch_coverage=1 00:06:58.701 --rc genhtml_function_coverage=1 00:06:58.701 --rc genhtml_legend=1 00:06:58.701 --rc geninfo_all_blocks=1 00:06:58.701 --rc geninfo_unexecuted_blocks=1 00:06:58.701 00:06:58.701 ' 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.701 --rc 
genhtml_branch_coverage=1 00:06:58.701 --rc genhtml_function_coverage=1 00:06:58.701 --rc genhtml_legend=1 00:06:58.701 --rc geninfo_all_blocks=1 00:06:58.701 --rc geninfo_unexecuted_blocks=1 00:06:58.701 00:06:58.701 ' 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.701 --rc genhtml_branch_coverage=1 00:06:58.701 --rc genhtml_function_coverage=1 00:06:58.701 --rc genhtml_legend=1 00:06:58.701 --rc geninfo_all_blocks=1 00:06:58.701 --rc geninfo_unexecuted_blocks=1 00:06:58.701 00:06:58.701 ' 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.701 --rc genhtml_branch_coverage=1 00:06:58.701 --rc genhtml_function_coverage=1 00:06:58.701 --rc genhtml_legend=1 00:06:58.701 --rc geninfo_all_blocks=1 00:06:58.701 --rc geninfo_unexecuted_blocks=1 00:06:58.701 00:06:58.701 ' 00:06:58.701 17:33:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:58.701 17:33:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2425177 00:06:58.701 17:33:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2425177 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2425177 ']' 00:06:58.701 17:33:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:58.701 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.702 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.702 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.702 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.702 17:33:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:58.702 [2024-11-20 17:33:58.608491] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
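The memory dump that follows is produced in two steps: the env_dpdk_get_mem_stats RPC asks the live target to write its DPDK heap/mempool/memzone state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then parses that file offline ("-m 0" selects per-element detail for heap id 0). The same flow driven by hand, with repo-relative paths shortened:

    scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                  # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0             # free/busy element list for heap id 0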
00:06:58.702 [2024-11-20 17:33:58.608570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425177 ] 00:06:58.962 [2024-11-20 17:33:58.685807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.962 [2024-11-20 17:33:58.719805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.531 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.531 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:59.531 17:33:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:59.531 17:33:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:59.531 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.531 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:59.531 { 00:06:59.531 "filename": "/tmp/spdk_mem_dump.txt" 00:06:59.531 } 00:06:59.531 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.531 17:33:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:59.531 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:59.531 1 heaps totaling size 860.000000 MiB 00:06:59.531 size: 860.000000 MiB heap id: 0 00:06:59.531 end heaps---------- 00:06:59.531 9 mempools totaling size 642.649841 MiB 00:06:59.531 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:59.531 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:59.531 size: 92.545471 MiB name: bdev_io_2425177 00:06:59.531 size: 51.011292 MiB name: evtpool_2425177 00:06:59.531 size: 50.003479 MiB name: msgpool_2425177 00:06:59.531 size: 36.509338 MiB name: fsdev_io_2425177 00:06:59.531 size: 21.763794 MiB name: PDU_Pool 00:06:59.531 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:59.531 size: 0.026123 MiB name: Session_Pool 00:06:59.531 end mempools------- 00:06:59.531 6 memzones totaling size 4.142822 MiB 00:06:59.531 size: 1.000366 MiB name: RG_ring_0_2425177 00:06:59.531 size: 1.000366 MiB name: RG_ring_1_2425177 00:06:59.531 size: 1.000366 MiB name: RG_ring_4_2425177 00:06:59.531 size: 1.000366 MiB name: RG_ring_5_2425177 00:06:59.531 size: 0.125366 MiB name: RG_ring_2_2425177 00:06:59.531 size: 0.015991 MiB name: RG_ring_3_2425177 00:06:59.531 end memzones------- 00:06:59.531 17:33:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:59.792 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:06:59.792 list of free elements. 
size: 13.984680 MiB 00:06:59.792 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:59.792 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:59.792 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:59.792 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:59.792 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:59.792 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:59.792 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:59.792 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:59.792 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:59.792 element at address: 0x20001d800000 with size: 0.582886 MiB 00:06:59.792 element at address: 0x200003e00000 with size: 0.495605 MiB 00:06:59.792 element at address: 0x20000d800000 with size: 0.490723 MiB 00:06:59.792 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:59.792 element at address: 0x200007000000 with size: 0.481934 MiB 00:06:59.792 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:06:59.792 element at address: 0x200003a00000 with size: 0.354858 MiB 00:06:59.792 list of standard malloc elements. size: 199.218628 MiB 00:06:59.792 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:59.792 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:59.792 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:59.792 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:59.792 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:59.792 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:59.792 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:59.792 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:59.792 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:59.792 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:59.792 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:59.792 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:59.792 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:59.792 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:59.792 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:59.792 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:59.792 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:59.792 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:59.792 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:59.792 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:59.792 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:59.792 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:59.792 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:06:59.792 element at address: 0x20000d8fdd80 with size: 0.000183 MiB
00:06:59.792 element at address: 0x200015ef44c0 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20001c0efc40 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20001c0efd00 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20001c2bc740 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20001d895380 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20001d895440 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20002ac68f80 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20002ac69040 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20002ac6fc40 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20002ac6fe40 with size: 0.000183 MiB
00:06:59.792 element at address: 0x20002ac6ff00 with size: 0.000183 MiB
00:06:59.792 list of memzone associated elements. size: 646.796692 MiB
00:06:59.792 element at address: 0x20001d895500 with size: 211.416748 MiB
00:06:59.792 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:59.792 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB
00:06:59.792 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:59.792 element at address: 0x200015ff4780 with size: 92.045044 MiB
00:06:59.792 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2425177_0
00:06:59.792 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:59.792 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2425177_0
00:06:59.792 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:59.792 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2425177_0
00:06:59.792 element at address: 0x2000071fdb80 with size: 36.008911 MiB
00:06:59.792 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2425177_0
00:06:59.792 element at address: 0x20001c3be940 with size: 20.255554 MiB
00:06:59.792 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:59.792 element at address: 0x200034bfeb40 with size: 18.005066 MiB
00:06:59.792 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:59.792 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:59.792 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2425177
00:06:59.792 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:59.792 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2425177
00:06:59.792 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:59.792 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2425177
00:06:59.792 element at address: 0x20000d8fde40 with size: 1.008118 MiB
00:06:59.792 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:59.792 element at address: 0x20001c2bc800 with size: 1.008118 MiB
00:06:59.792 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:59.792 element at address: 0x2000096fde40 with size: 1.008118 MiB
00:06:59.792 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:59.792 element at address: 0x2000070fba40 with size: 1.008118 MiB
00:06:59.792 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:59.792 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:59.792 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2425177
00:06:59.792 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:59.792 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2425177
00:06:59.792 element at address: 0x200015ef4580 with size: 1.000488 MiB
00:06:59.792 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2425177
00:06:59.792 element at address: 0x200034afe940 with size: 1.000488 MiB
00:06:59.792 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2425177
00:06:59.792 element at address: 0x200003a7f680 with size: 0.500488 MiB
00:06:59.792 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2425177
00:06:59.792 element at address: 0x200003e7eec0 with size: 0.500488 MiB
00:06:59.792 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2425177
00:06:59.792 element at address: 0x20000d87db80 with size: 0.500488 MiB
00:06:59.792 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:59.792 element at address: 0x20000707b780 with size: 0.500488 MiB
00:06:59.792 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:59.792 element at address: 0x20001c27c540 with size: 0.250488 MiB
00:06:59.792 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:59.792 element at address: 0x200003a5f300 with size: 0.125488 MiB
00:06:59.792 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2425177
00:06:59.792 element at address: 0x2000096f5b80 with size: 0.031738 MiB
00:06:59.792 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:59.792 element at address: 0x20002ac69100 with size: 0.023743 MiB
00:06:59.792 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:59.792 element at address: 0x200003a5b040 with size: 0.016113 MiB
00:06:59.792 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2425177
00:06:59.792 element at address: 0x20002ac6f240 with size: 0.002441 MiB
00:06:59.792 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:59.792 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:59.792 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2425177
00:06:59.792 element at address: 0x200003aff940 with size: 0.000305 MiB
00:06:59.792 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2425177
00:06:59.792 element at address: 0x200003a5ae40 with size: 0.000305 MiB
00:06:59.792 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2425177
00:06:59.792 element at address: 0x20002ac6fd00 with size: 0.000305 MiB
00:06:59.792 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:59.792 17:33:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:59.792 17:33:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2425177
00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2425177 ']'
00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2425177
00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2425177
00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2425177'
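The listing above is the raw output of the dpdk_mem_utility subtest: each "element" line is an allocation inside a DPDK heap, and each "associated memzone info" line ties it to a named region; the MP_* and RG_* names appear to be the app's mempools and rings, keyed by its pid (2425177 here). A rough cross-check of the reported total (646.796692 MiB) can be done by summing the per-element sizes from a saved copy of the dump; memdump.log is a hypothetical file name:

  # Sum the "with size: X MiB" fields from a saved dump (file name is illustrative)
  grep -o 'with size: [0-9.]* MiB' memdump.log |
      awk '{ total += $3 } END { printf "total: %.6f MiB\n", total }'

The sum will only approximate the headline figure, since the dump mixes per-element and per-memzone accounting.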
00:06:59.792 killing process with pid 2425177 00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2425177 00:06:59.792 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2425177 00:07:00.054 00:07:00.054 real 0m1.406s 00:07:00.054 user 0m1.466s 00:07:00.054 sys 0m0.428s 00:07:00.054 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.054 17:33:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.054 ************************************ 00:07:00.054 END TEST dpdk_mem_utility 00:07:00.054 ************************************ 00:07:00.054 17:33:59 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:00.054 17:33:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.054 17:33:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.054 17:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:00.054 ************************************ 00:07:00.054 START TEST event 00:07:00.054 ************************************ 00:07:00.054 17:33:59 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:00.054 * Looking for test storage... 00:07:00.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:00.054 17:33:59 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.054 17:33:59 event -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.054 17:33:59 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.315 17:34:00 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.315 17:34:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.315 17:34:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.315 17:34:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.315 17:34:00 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.315 17:34:00 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.315 17:34:00 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.315 17:34:00 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.315 17:34:00 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.315 17:34:00 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.315 17:34:00 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.315 17:34:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.315 17:34:00 event -- scripts/common.sh@344 -- # case "$op" in 00:07:00.315 17:34:00 event -- scripts/common.sh@345 -- # : 1 00:07:00.315 17:34:00 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.315 17:34:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.315 17:34:00 event -- scripts/common.sh@365 -- # decimal 1 00:07:00.315 17:34:00 event -- scripts/common.sh@353 -- # local d=1 00:07:00.315 17:34:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.315 17:34:00 event -- scripts/common.sh@355 -- # echo 1 00:07:00.315 17:34:00 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.315 17:34:00 event -- scripts/common.sh@366 -- # decimal 2 00:07:00.315 17:34:00 event -- scripts/common.sh@353 -- # local d=2 00:07:00.315 17:34:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.315 17:34:00 event -- scripts/common.sh@355 -- # echo 2 00:07:00.315 17:34:00 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.315 17:34:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.315 17:34:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.315 17:34:00 event -- scripts/common.sh@368 -- # return 0 00:07:00.315 17:34:00 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.315 17:34:00 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.315 --rc genhtml_branch_coverage=1 00:07:00.315 --rc genhtml_function_coverage=1 00:07:00.315 --rc genhtml_legend=1 00:07:00.315 --rc geninfo_all_blocks=1 00:07:00.315 --rc geninfo_unexecuted_blocks=1 00:07:00.315 00:07:00.315 ' 00:07:00.315 17:34:00 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.315 --rc genhtml_branch_coverage=1 00:07:00.315 --rc genhtml_function_coverage=1 00:07:00.315 --rc genhtml_legend=1 00:07:00.315 --rc geninfo_all_blocks=1 00:07:00.315 --rc geninfo_unexecuted_blocks=1 00:07:00.315 00:07:00.315 ' 00:07:00.315 17:34:00 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.315 --rc genhtml_branch_coverage=1 00:07:00.315 --rc genhtml_function_coverage=1 00:07:00.315 --rc genhtml_legend=1 00:07:00.315 --rc geninfo_all_blocks=1 00:07:00.315 --rc geninfo_unexecuted_blocks=1 00:07:00.315 00:07:00.315 ' 00:07:00.315 17:34:00 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.315 --rc genhtml_branch_coverage=1 00:07:00.315 --rc genhtml_function_coverage=1 00:07:00.315 --rc genhtml_legend=1 00:07:00.315 --rc geninfo_all_blocks=1 00:07:00.315 --rc geninfo_unexecuted_blocks=1 00:07:00.315 00:07:00.315 ' 00:07:00.315 17:34:00 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:00.315 17:34:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.315 17:34:00 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:00.315 17:34:00 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:00.315 17:34:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.315 17:34:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.315 ************************************ 00:07:00.315 START TEST event_perf 00:07:00.315 ************************************ 00:07:00.315 17:34:00 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1
00:07:00.315 Running I/O for 1 seconds...[2024-11-20 17:34:00.091101] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:07:00.315 [2024-11-20 17:34:00.091269] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425492 ]
00:07:00.315 [2024-11-20 17:34:00.176901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:00.315 [2024-11-20 17:34:00.217536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:00.315 [2024-11-20 17:34:00.217693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:07:00.315 [2024-11-20 17:34:00.217841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.315 Running I/O for 1 seconds...[2024-11-20 17:34:00.217842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:07:01.702
00:07:01.702 lcore 0: 183690
00:07:01.702 lcore 1: 183693
00:07:01.702 lcore 2: 183693
00:07:01.702 lcore 3: 183693
00:07:01.702 done.
00:07:01.702
00:07:01.702 real 0m1.184s
00:07:01.702 user 0m4.076s
00:07:01.702 sys 0m0.105s
00:07:01.702 17:34:01 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:01.702 17:34:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:01.702 ************************************
00:07:01.702 END TEST event_perf
00:07:01.702 ************************************
00:07:01.702 17:34:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:01.702 17:34:01 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:01.702 17:34:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:01.702 17:34:01 event -- common/autotest_common.sh@10 -- # set +x
00:07:01.702 ************************************
00:07:01.702 START TEST event_reactor ************************************
00:07:01.702 17:34:01 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:01.702 [2024-11-20 17:34:01.348872] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
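A note on the masks in the event_perf run above: -m 0xF is the reactor core mask that SPDK forwards to DPDK as -c 0xF. Bit N selects CPU N, so 0xF brings up the four reactors and the four lcore counters reported. A minimal shell sketch of building such a mask (ncores is an illustrative variable):

  # Mask with the low $ncores bits set: 4 cores -> 0xf
  ncores=4
  printf -- '-m 0x%x\n' $(( (1 << ncores) - 1 ))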
00:07:01.702 [2024-11-20 17:34:01.348962] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425719 ]
00:07:01.702 [2024-11-20 17:34:01.428997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.702 [2024-11-20 17:34:01.457137] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.642 test_start
00:07:02.642 oneshot
00:07:02.642 tick 100
00:07:02.642 tick 100
00:07:02.642 tick 250
00:07:02.642 tick 100
00:07:02.642 tick 100
00:07:02.642 tick 100
00:07:02.642 tick 250
00:07:02.642 tick 500
00:07:02.642 tick 100
00:07:02.642 tick 100
00:07:02.642 tick 250
00:07:02.642 tick 100
00:07:02.642 tick 100
00:07:02.642 test_end
00:07:02.642
00:07:02.642 real 0m1.163s
00:07:02.642 user 0m1.073s
00:07:02.642 sys 0m0.086s
00:07:02.642 17:34:02 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:02.642 17:34:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:02.642 ************************************
00:07:02.642 END TEST event_reactor
00:07:02.642 ************************************
00:07:02.642 17:34:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:02.642 17:34:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:02.642 17:34:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:02.642 17:34:02 event -- common/autotest_common.sh@10 -- # set +x
00:07:02.903 ************************************
00:07:02.903 START TEST event_reactor_perf
00:07:02.903 ************************************
00:07:02.903 17:34:02 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:02.903 [2024-11-20 17:34:02.592294] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
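The test_start/test_end block above is the event_reactor subtest: what it prints looks like timed pollers with periods 100, 250 and 500 (in the test's own tick units) firing on the single reactor while the one-second run elapses, which is why "tick 100" appears far more often than "tick 500". A small awk sketch for tallying a saved copy of that trace (reactor.log is a hypothetical file name):

  # Count how often each tick period fired; input lines look like "00:07:02.642 tick 100"
  awk '$2 == "tick" { count[$3]++ } END { for (p in count) print "period", p ":", count[p], "ticks" }' reactor.log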
00:07:02.903 [2024-11-20 17:34:02.592379] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426073 ] 00:07:02.903 [2024-11-20 17:34:02.671063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.903 [2024-11-20 17:34:02.699863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.846 test_start 00:07:03.846 test_end 00:07:03.846 Performance: 541457 events per second 00:07:03.846 00:07:03.846 real 0m1.165s 00:07:03.846 user 0m1.072s 00:07:03.846 sys 0m0.089s 00:07:03.846 17:34:03 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.846 17:34:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.846 ************************************ 00:07:03.846 END TEST event_reactor_perf 00:07:03.846 ************************************ 00:07:04.107 17:34:03 event -- event/event.sh@49 -- # uname -s 00:07:04.107 17:34:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:04.107 17:34:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:04.107 17:34:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.107 17:34:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.107 17:34:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.107 ************************************ 00:07:04.107 START TEST event_scheduler 00:07:04.107 ************************************ 00:07:04.107 17:34:03 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:04.107 * Looking for test storage... 
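The "lt 1.15 2" / cmp_versions trace that ran before TEST event above, and that runs again below before the scheduler test, is the harness checking the installed lcov version to pick coverage options: both version strings are split on '.', '-' and ':' and compared field by field. A condensed re-implementation of the idea (not the harness's exact function):

  # Return 0 if $1 < $2, comparing dotted numeric fields; missing fields count as 0
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not "less than"
  }
  version_lt 1.15 2 && echo "1.15 < 2"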
00:07:04.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:04.107 17:34:03 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:04.107 17:34:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:04.107 17:34:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:04.107 17:34:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:04.107 17:34:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.107 17:34:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.107 17:34:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.107 17:34:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.107 17:34:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.108 17:34:04 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:04.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.108 --rc genhtml_branch_coverage=1 00:07:04.108 --rc genhtml_function_coverage=1 00:07:04.108 --rc genhtml_legend=1 00:07:04.108 --rc geninfo_all_blocks=1 00:07:04.108 --rc geninfo_unexecuted_blocks=1 00:07:04.108 00:07:04.108 ' 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:04.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.108 --rc genhtml_branch_coverage=1 00:07:04.108 --rc genhtml_function_coverage=1 00:07:04.108 --rc genhtml_legend=1 00:07:04.108 --rc geninfo_all_blocks=1 00:07:04.108 --rc geninfo_unexecuted_blocks=1 00:07:04.108 00:07:04.108 ' 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:04.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.108 --rc genhtml_branch_coverage=1 00:07:04.108 --rc genhtml_function_coverage=1 00:07:04.108 --rc genhtml_legend=1 00:07:04.108 --rc geninfo_all_blocks=1 00:07:04.108 --rc geninfo_unexecuted_blocks=1 00:07:04.108 00:07:04.108 ' 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:04.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.108 --rc genhtml_branch_coverage=1 00:07:04.108 --rc genhtml_function_coverage=1 00:07:04.108 --rc genhtml_legend=1 00:07:04.108 --rc geninfo_all_blocks=1 00:07:04.108 --rc geninfo_unexecuted_blocks=1 00:07:04.108 00:07:04.108 ' 00:07:04.108 17:34:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:04.108 17:34:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2426457 00:07:04.108 17:34:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.108 17:34:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:04.108 17:34:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2426457 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2426457 ']' 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.108 17:34:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:04.370 [2024-11-20 17:34:04.068336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:04.370 [2024-11-20 17:34:04.068404] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426457 ] 00:07:04.370 [2024-11-20 17:34:04.150284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.370 [2024-11-20 17:34:04.204590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.370 [2024-11-20 17:34:04.204754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.370 [2024-11-20 17:34:04.204897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.370 [2024-11-20 17:34:04.204897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:05.313 17:34:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 [2024-11-20 17:34:04.891437] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:05.313 [2024-11-20 17:34:04.891458] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:05.313 [2024-11-20 17:34:04.891468] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:05.313 [2024-11-20 17:34:04.891474] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:05.313 [2024-11-20 17:34:04.891479] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 [2024-11-20 17:34:04.950673] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
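What the scheduler startup above shows: the app was launched with --wait-for-rpc, so the harness can pick a scheduler before subsystems initialize. framework_set_scheduler dynamic is issued, the DPDK power governor fails to initialize because the 0xF mask covers only some SMT siblings, and the dynamic scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95) before framework_start_init lets initialization proceed. The two RPCs as the trace issues them, where rpc_cmd stands in for scripts/rpc.py against the app's socket:

  rpc_cmd framework_set_scheduler dynamic   # choose the dynamic scheduler while still in --wait-for-rpc
  rpc_cmd framework_start_init              # then release subsystem initialization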
00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.313 17:34:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 ************************************ 00:07:05.313 START TEST scheduler_create_thread 00:07:05.313 ************************************ 00:07:05.313 17:34:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:05.313 17:34:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:05.313 17:34:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 2 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 3 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 4 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 5 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 6 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 7 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.313 8 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.313 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.885 9 00:07:05.885 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.885 17:34:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:05.885 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.885 17:34:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.878 10 00:07:06.878 17:34:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.878 17:34:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:06.878 17:34:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.878 17:34:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.825 17:34:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.825 17:34:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:07.825 17:34:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:07.825 17:34:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.825 17:34:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.396 17:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.396 17:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:08.396 17:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.396 17:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.340 17:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.340 17:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:09.340 17:34:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:09.340 17:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.340 17:34:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.912 17:34:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.912 00:07:09.912 real 0m4.567s 00:07:09.912 user 0m0.027s 00:07:09.912 sys 0m0.005s 00:07:09.912 17:34:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.912 17:34:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.912 ************************************ 00:07:09.912 END TEST scheduler_create_thread 00:07:09.912 ************************************ 00:07:09.912 17:34:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:09.912 17:34:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2426457 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2426457 ']' 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2426457 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2426457 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2426457' 00:07:09.912 killing process with pid 2426457 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2426457 00:07:09.912 17:34:09 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2426457 00:07:09.912 [2024-11-20 17:34:09.737472] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
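The scheduler_create_thread subtest traced above drives a test-only RPC plugin: it creates four pinned busy threads (active_pinned, masks 0x1 through 0x8, activity 100), four pinned idle threads (idle_pinned, activity 0), then unpinned threads at 30 and 0 percent activity, raises one thread's activity with scheduler_thread_set_active 11 50, and finally creates and deletes a thread ("deleted", id 12) to exercise removal. A sketch of the creation loop, condensing the individual calls scheduler.sh makes (the loop form is mine, the RPC names and flags are from the trace):

  for core in 0 1 2 3; do
      mask=$(printf '0x%x' $(( 1 << core )))
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
  done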
00:07:10.172 00:07:10.172 real 0m6.100s 00:07:10.172 user 0m15.139s 00:07:10.172 sys 0m0.446s 00:07:10.172 17:34:09 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.172 17:34:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.172 ************************************ 00:07:10.172 END TEST event_scheduler 00:07:10.172 ************************************ 00:07:10.172 17:34:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:10.172 17:34:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:10.172 17:34:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.172 17:34:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.172 17:34:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.172 ************************************ 00:07:10.172 START TEST app_repeat 00:07:10.172 ************************************ 00:07:10.172 17:34:10 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2427560 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2427560' 00:07:10.172 Process app_repeat pid: 2427560 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:10.172 spdk_app_start Round 0 00:07:10.172 17:34:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2427560 /var/tmp/spdk-nbd.sock 00:07:10.172 17:34:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2427560 ']' 00:07:10.172 17:34:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.172 17:34:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.172 17:34:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.172 17:34:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.172 17:34:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.172 [2024-11-20 17:34:10.047110] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
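app_repeat, started above with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, exercises the event framework's restart path: for each of three rounds it starts the app, creates two Malloc bdevs, exports them as /dev/nbd0 and /dev/nbd1, runs a write-and-verify pass, tears the devices down, kills the instance, and repeats. The per-round bdev setup, as issued in the Round 0 trace below (the rpc.py path is shortened here):

  RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # trace uses the full workspace path
  $RPC bdev_malloc_create 64 4096                  # -> Malloc0: 64 MiB of 4096-byte blocks
  $RPC bdev_malloc_create 64 4096                  # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1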
00:07:10.172 [2024-11-20 17:34:10.047203] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427560 ] 00:07:10.433 [2024-11-20 17:34:10.124618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.433 [2024-11-20 17:34:10.155068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.433 [2024-11-20 17:34:10.155069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.433 17:34:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.433 17:34:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:10.433 17:34:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.693 Malloc0 00:07:10.693 17:34:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.693 Malloc1 00:07:10.693 17:34:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.693 17:34:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:10.955 /dev/nbd0 00:07:10.955 17:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.955 17:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.955 1+0 records in 00:07:10.955 1+0 records out 00:07:10.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271411 s, 15.1 MB/s 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.955 17:34:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:10.955 17:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.955 17:34:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.955 17:34:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.216 /dev/nbd1 00:07:11.216 17:34:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.216 17:34:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.216 17:34:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.216 1+0 records in 00:07:11.216 1+0 records out 00:07:11.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310309 s, 13.2 MB/s 00:07:11.217 17:34:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.217 17:34:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:11.217 17:34:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.217 17:34:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.217 17:34:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:11.217 17:34:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.217 17:34:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.217 
17:34:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.217 17:34:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.217 17:34:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.477 17:34:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.477 { 00:07:11.477 "nbd_device": "/dev/nbd0", 00:07:11.477 "bdev_name": "Malloc0" 00:07:11.477 }, 00:07:11.477 { 00:07:11.478 "nbd_device": "/dev/nbd1", 00:07:11.478 "bdev_name": "Malloc1" 00:07:11.478 } 00:07:11.478 ]' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.478 { 00:07:11.478 "nbd_device": "/dev/nbd0", 00:07:11.478 "bdev_name": "Malloc0" 00:07:11.478 }, 00:07:11.478 { 00:07:11.478 "nbd_device": "/dev/nbd1", 00:07:11.478 "bdev_name": "Malloc1" 00:07:11.478 } 00:07:11.478 ]' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.478 /dev/nbd1' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.478 /dev/nbd1' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:11.478 256+0 records in 00:07:11.478 256+0 records out 00:07:11.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012729 s, 82.4 MB/s 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.478 256+0 records in 00:07:11.478 256+0 records out 00:07:11.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121039 s, 86.6 MB/s 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.478 256+0 records in 00:07:11.478 256+0 records out 00:07:11.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130323 s, 80.5 MB/s 00:07:11.478 17:34:11 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.478 17:34:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.739 17:34:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.999 17:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.259 17:34:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.259 17:34:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:12.259 17:34:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:12.519 [2024-11-20 17:34:12.234264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.519 [2024-11-20 17:34:12.261206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.519 [2024-11-20 17:34:12.261222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.519 [2024-11-20 17:34:12.290048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:12.519 [2024-11-20 17:34:12.290079] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:15.816 17:34:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:15.816 17:34:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:15.816 spdk_app_start Round 1 00:07:15.816 17:34:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2427560 /var/tmp/spdk-nbd.sock 00:07:15.816 17:34:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2427560 ']' 00:07:15.816 17:34:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.816 17:34:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.816 17:34:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:15.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
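The data path verified in Round 0 above is plain dd plus cmp: 1 MiB of /dev/urandom goes to a scratch file, the file is written through each nbd device with O_DIRECT, and cmp -b -n 1M confirms each device reads back the same bytes before the scratch file is removed. Condensed from the traced commands (scratch path shortened to nbdrandtest):

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct     # write through the nbd device
      cmp -b -n 1M nbdrandtest "$nbd"                                # read back and verify
  done
  rm nbdrandtest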
00:07:15.816 17:34:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.816 17:34:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.816 17:34:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.816 17:34:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:15.816 17:34:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.816 Malloc0 00:07:15.816 17:34:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.816 Malloc1 00:07:15.816 17:34:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.816 17:34:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:16.077 /dev/nbd0 00:07:16.077 17:34:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.077 17:34:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:16.077 1+0 records in 00:07:16.077 1+0 records out 00:07:16.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289774 s, 14.1 MB/s 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.077 17:34:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:16.077 17:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.077 17:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.077 17:34:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:16.338 /dev/nbd1 00:07:16.338 17:34:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.338 17:34:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.338 1+0 records in 00:07:16.338 1+0 records out 00:07:16.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344292 s, 11.9 MB/s 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.338 17:34:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:16.338 17:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.338 17:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.338 17:34:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.338 17:34:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.338 17:34:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:16.599 { 00:07:16.599 "nbd_device": "/dev/nbd0", 00:07:16.599 "bdev_name": "Malloc0" 00:07:16.599 }, 00:07:16.599 { 00:07:16.599 "nbd_device": "/dev/nbd1", 00:07:16.599 "bdev_name": "Malloc1" 00:07:16.599 } 00:07:16.599 ]' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.599 { 00:07:16.599 "nbd_device": "/dev/nbd0", 00:07:16.599 "bdev_name": "Malloc0" 00:07:16.599 }, 00:07:16.599 { 00:07:16.599 "nbd_device": "/dev/nbd1", 00:07:16.599 "bdev_name": "Malloc1" 00:07:16.599 } 00:07:16.599 ]' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:16.599 /dev/nbd1' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:16.599 /dev/nbd1' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:16.599 256+0 records in 00:07:16.599 256+0 records out 00:07:16.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122118 s, 85.9 MB/s 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:16.599 256+0 records in 00:07:16.599 256+0 records out 00:07:16.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126565 s, 82.8 MB/s 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:16.599 256+0 records in 00:07:16.599 256+0 records out 00:07:16.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133178 s, 78.7 MB/s 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.599 17:34:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.860 17:34:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.120 17:34:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.381 17:34:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.381 17:34:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.644 17:34:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:17.644 [2024-11-20 17:34:17.418051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.644 [2024-11-20 17:34:17.444903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.644 [2024-11-20 17:34:17.444904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.644 [2024-11-20 17:34:17.474216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:17.644 [2024-11-20 17:34:17.474247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:20.943 17:34:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:20.943 17:34:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:20.943 spdk_app_start Round 2 00:07:20.943 17:34:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2427560 /var/tmp/spdk-nbd.sock 00:07:20.943 17:34:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2427560 ']' 00:07:20.943 17:34:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.943 17:34:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.943 17:34:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:20.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
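The nbd_get_count call traced just above is the post-teardown check: once both disks are stopped, nbd_get_disks must report an empty list, so the count has to come out 0. A sketch of that counting logic — the rpc.py invocation is shortened, and the || true mirrors the bare `true` visible in the trace, since grep -c exits non-zero when it counts zero matches:

    # Sketch of nbd_get_count as traced; rpc.py path shortened.
    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count

        # Ask the spdk-nbd app which /dev/nbdX nodes it still exports.
        nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

        # grep -c prints the count but exits 1 when that count is 0,
        # so the || true keeps the pipeline alive under set -e.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }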
00:07:20.943 17:34:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.943 17:34:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.943 17:34:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.943 17:34:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:20.943 17:34:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.943 Malloc0 00:07:20.943 17:34:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:20.943 Malloc1 00:07:20.943 17:34:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.943 17:34:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:21.204 /dev/nbd0 00:07:21.204 17:34:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:21.204 17:34:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:21.204 1+0 records in 00:07:21.204 1+0 records out 00:07:21.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272639 s, 15.0 MB/s 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.204 17:34:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:21.204 17:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.204 17:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.204 17:34:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:21.465 /dev/nbd1 00:07:21.465 17:34:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:21.465 17:34:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.465 1+0 records in 00:07:21.465 1+0 records out 00:07:21.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279604 s, 14.6 MB/s 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.465 17:34:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:21.465 17:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.465 17:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.465 17:34:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.465 17:34:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.465 17:34:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:21.727 { 00:07:21.727 "nbd_device": "/dev/nbd0", 00:07:21.727 "bdev_name": "Malloc0" 00:07:21.727 }, 00:07:21.727 { 00:07:21.727 "nbd_device": "/dev/nbd1", 00:07:21.727 "bdev_name": "Malloc1" 00:07:21.727 } 00:07:21.727 ]' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:21.727 { 00:07:21.727 "nbd_device": "/dev/nbd0", 00:07:21.727 "bdev_name": "Malloc0" 00:07:21.727 }, 00:07:21.727 { 00:07:21.727 "nbd_device": "/dev/nbd1", 00:07:21.727 "bdev_name": "Malloc1" 00:07:21.727 } 00:07:21.727 ]' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:21.727 /dev/nbd1' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:21.727 /dev/nbd1' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:21.727 256+0 records in 00:07:21.727 256+0 records out 00:07:21.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118472 s, 88.5 MB/s 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:21.727 256+0 records in 00:07:21.727 256+0 records out 00:07:21.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122653 s, 85.5 MB/s 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.727 256+0 records in 00:07:21.727 256+0 records out 00:07:21.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135497 s, 77.4 MB/s 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.727 17:34:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.988 17:34:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.248 17:34:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.248 17:34:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.509 17:34:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.509 17:34:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:22.770 17:34:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:22.770 [2024-11-20 17:34:22.525566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.770 [2024-11-20 17:34:22.552908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.770 [2024-11-20 17:34:22.552909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.770 [2024-11-20 17:34:22.581720] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:22.770 [2024-11-20 17:34:22.581750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:26.236 17:34:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2427560 /var/tmp/spdk-nbd.sock 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2427560 ']' 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:26.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
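The dd/cmp pass traced in the round above is the actual data-integrity check behind nbd_rpc_data_verify: 1 MiB of urandom is staged to a file, written through each nbd node with O_DIRECT, then read back and byte-compared. A condensed sketch of nbd_dd_data_verify — the scratch path is simplified from the repo-relative one in the trace, and set -e is assumed so a failed dd or cmp fails the test:

    # Sketch of nbd_dd_data_verify; runs under set -e by assumption.
    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest i

        if [ "$operation" = write ]; then
            # Stage 256 x 4 KiB of random data, then copy it onto
            # every nbd device with direct writes.
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # Byte-compare the first 1 MiB of each device against the
            # staged file; any corruption makes cmp exit non-zero.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M $tmp_file $i
            done
            rm $tmp_file
        fi
    }

It is called as nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write before the stop, then again with verify, exactly as the trace shows.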
00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:26.236 17:34:25 event.app_repeat -- event/event.sh@39 -- # killprocess 2427560 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2427560 ']' 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2427560 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2427560 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2427560' 00:07:26.236 killing process with pid 2427560 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2427560 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2427560 00:07:26.236 spdk_app_start is called in Round 0. 00:07:26.236 Shutdown signal received, stop current app iteration 00:07:26.236 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:26.236 spdk_app_start is called in Round 1. 00:07:26.236 Shutdown signal received, stop current app iteration 00:07:26.236 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:26.236 spdk_app_start is called in Round 2. 00:07:26.236 Shutdown signal received, stop current app iteration 00:07:26.236 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:26.236 spdk_app_start is called in Round 3. 
00:07:26.236 Shutdown signal received, stop current app iteration 00:07:26.236 17:34:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:26.236 17:34:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:26.236 00:07:26.236 real 0m15.779s 00:07:26.236 user 0m34.645s 00:07:26.236 sys 0m2.267s 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.236 17:34:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.236 ************************************ 00:07:26.236 END TEST app_repeat 00:07:26.236 ************************************ 00:07:26.236 17:34:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:26.236 17:34:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:26.236 17:34:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.236 17:34:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.236 17:34:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.236 ************************************ 00:07:26.236 START TEST cpu_locks 00:07:26.236 ************************************ 00:07:26.236 17:34:25 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:26.236 * Looking for test storage... 00:07:26.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:26.236 17:34:25 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:26.236 17:34:25 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:26.236 17:34:25 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:26.236 17:34:26 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:26.236 17:34:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.236 17:34:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.236 17:34:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.236 17:34:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.236 17:34:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.236 17:34:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.236 17:34:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.236 17:34:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.237 17:34:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:26.237 17:34:26 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.237 17:34:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:26.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.237 --rc genhtml_branch_coverage=1 00:07:26.237 --rc genhtml_function_coverage=1 00:07:26.237 --rc genhtml_legend=1 00:07:26.237 --rc geninfo_all_blocks=1 00:07:26.237 --rc geninfo_unexecuted_blocks=1 00:07:26.237 00:07:26.237 ' 00:07:26.237 17:34:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:26.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.237 --rc genhtml_branch_coverage=1 00:07:26.237 --rc genhtml_function_coverage=1 00:07:26.237 --rc genhtml_legend=1 00:07:26.237 --rc geninfo_all_blocks=1 00:07:26.237 --rc geninfo_unexecuted_blocks=1 00:07:26.237 00:07:26.237 ' 00:07:26.237 17:34:26 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:26.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.237 --rc genhtml_branch_coverage=1 00:07:26.237 --rc genhtml_function_coverage=1 00:07:26.237 --rc genhtml_legend=1 00:07:26.237 --rc geninfo_all_blocks=1 00:07:26.237 --rc geninfo_unexecuted_blocks=1 00:07:26.237 00:07:26.237 ' 00:07:26.237 17:34:26 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:26.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.237 --rc genhtml_branch_coverage=1 00:07:26.237 --rc genhtml_function_coverage=1 00:07:26.237 --rc genhtml_legend=1 00:07:26.237 --rc geninfo_all_blocks=1 00:07:26.237 --rc geninfo_unexecuted_blocks=1 00:07:26.237 00:07:26.237 ' 00:07:26.237 17:34:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:26.237 17:34:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:26.237 17:34:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:26.237 17:34:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:26.237 17:34:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.237 17:34:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.237 17:34:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.237 ************************************ 
00:07:26.237 START TEST default_locks 00:07:26.237 ************************************ 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2431120 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2431120 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2431120 ']' 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.237 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.498 [2024-11-20 17:34:26.163111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:26.498 [2024-11-20 17:34:26.163190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431120 ] 00:07:26.498 [2024-11-20 17:34:26.239736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.498 [2024-11-20 17:34:26.274093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.068 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.068 17:34:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:27.068 17:34:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2431120 00:07:27.068 17:34:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2431120 00:07:27.068 17:34:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.638 lslocks: write error 00:07:27.638 17:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2431120 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2431120 ']' 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2431120 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2431120 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 2431120' 00:07:27.639 killing process with pid 2431120 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2431120 00:07:27.639 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2431120 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2431120 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2431120 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2431120 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2431120 ']' 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
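The killprocess helper whose trace ends here is the standard teardown: probe the pid with signal 0, check what the process is before killing it, then kill and reap. A sketch reconstructed from the xtrace — what the real helper does in the sudo branch is an assumption; killing the process group is one plausible reading, since the trace only shows the comparison:

    # Sketch of killprocess as traced; sudo-branch behavior assumed.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1

        # kill -0 delivers no signal; it only tests that the pid
        # exists and that we are allowed to signal it.
        kill -0 "$pid" || return 0

        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # A target launched via sudo needs its whole group killed
            # (assumption; the trace only shows the string test).
            if [ "$process_name" = sudo ]; then
                kill -- "-$pid"
                return
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }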
00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2431120) - No such process 00:07:27.899 ERROR: process (pid: 2431120) is no longer running 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:27.899 00:07:27.899 real 0m1.579s 00:07:27.899 user 0m1.703s 00:07:27.899 sys 0m0.561s 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.899 17:34:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.899 ************************************ 00:07:27.899 END TEST default_locks 00:07:27.899 ************************************ 00:07:27.899 17:34:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:27.899 17:34:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.899 17:34:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.899 17:34:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.899 ************************************ 00:07:27.899 START TEST default_locks_via_rpc 00:07:27.899 ************************************ 00:07:27.899 17:34:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2431490 00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2431490 00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2431490 ']' 00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
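The NOT wrapper traced across the default_locks teardown inverts a command's exit status, which is how the suite asserts that waitforlisten on a killed pid must fail: the "No such process" error above is the expected outcome, and NOT turns it into a pass. A minimal sketch — the >128 handling is simplified; the real helper, as the trace hints, also distinguishes signal deaths:

    # Sketch of the NOT negative-test wrapper.
    NOT() {
        local es=0
        "$@" || es=$?

        # Exit codes above 128 mean death by signal; fold them into a
        # plain failure here (simplification).
        ((es > 128)) && es=1

        # The wrapper succeeds exactly when the command failed.
        ((es != 0))
    }

    # Usage, as in the trace: the tgt was already killed, so this must
    # fail, and NOT converts that failure into a passing assertion.
    # NOT waitforlisten 2431120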
00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.900 17:34:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.160 [2024-11-20 17:34:27.815387] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:28.160 [2024-11-20 17:34:27.815437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431490 ] 00:07:28.160 [2024-11-20 17:34:27.892041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.160 [2024-11-20 17:34:27.920807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2431490 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2431490 00:07:28.732 17:34:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.302 17:34:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2431490 00:07:29.302 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2431490 ']' 00:07:29.302 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2431490 00:07:29.302 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:29.302 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.302 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2431490 00:07:29.562 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.562 
17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.562 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2431490' 00:07:29.562 killing process with pid 2431490 00:07:29.562 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2431490 00:07:29.562 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2431490 00:07:29.562 00:07:29.562 real 0m1.662s 00:07:29.562 user 0m1.783s 00:07:29.562 sys 0m0.568s 00:07:29.562 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.562 17:34:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.562 ************************************ 00:07:29.562 END TEST default_locks_via_rpc 00:07:29.562 ************************************ 00:07:29.562 17:34:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:29.562 17:34:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.562 17:34:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.562 17:34:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.823 ************************************ 00:07:29.823 START TEST non_locking_app_on_locked_coremask 00:07:29.823 ************************************ 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2431857 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2431857 /var/tmp/spdk.sock 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2431857 ']' 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.823 17:34:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.823 [2024-11-20 17:34:29.548448] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:29.823 [2024-11-20 17:34:29.548497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431857 ] 00:07:29.823 [2024-11-20 17:34:29.622894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.823 [2024-11-20 17:34:29.652992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2431930 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2431930 /var/tmp/spdk2.sock 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2431930 ']' 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.764 17:34:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.764 [2024-11-20 17:34:30.388915] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:30.764 [2024-11-20 17:34:30.388973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431930 ] 00:07:30.764 [2024-11-20 17:34:30.460736] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:30.764 [2024-11-20 17:34:30.460759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.764 [2024-11-20 17:34:30.517725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.334 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.334 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:31.334 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2431857 00:07:31.334 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2431857 00:07:31.334 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.905 lslocks: write error 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2431857 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2431857 ']' 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2431857 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2431857 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2431857' 00:07:31.905 killing process with pid 2431857 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2431857 00:07:31.905 17:34:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2431857 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2431930 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2431930 ']' 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2431930 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2431930 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2431930' 00:07:32.475 
killing process with pid 2431930 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2431930 00:07:32.475 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2431930 00:07:32.736 00:07:32.736 real 0m2.908s 00:07:32.736 user 0m3.205s 00:07:32.736 sys 0m0.920s 00:07:32.736 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.736 17:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.736 ************************************ 00:07:32.736 END TEST non_locking_app_on_locked_coremask 00:07:32.736 ************************************ 00:07:32.736 17:34:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:32.736 17:34:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.736 17:34:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.736 17:34:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.736 ************************************ 00:07:32.736 START TEST locking_app_on_unlocked_coremask 00:07:32.736 ************************************ 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2432542 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2432542 /var/tmp/spdk.sock 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2432542 ']' 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.736 17:34:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.736 [2024-11-20 17:34:32.533879] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:32.736 [2024-11-20 17:34:32.533938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432542 ] 00:07:32.736 [2024-11-20 17:34:32.610382] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
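The "CPU core locks deactivated" notices come from --disable-cpumask-locks; without that flag, each reactor core appears to be claimed through an exclusive lock on a per-core file (the /var/tmp/spdk_cpu_lock_* names that check_remaining_locks enumerates later in this log). A hedged shell equivalent of such a claim, assuming that file naming:

    # Emulate claiming core 0 with an exclusive, non-blocking flock;
    # this mirrors what app.c's claim_cpu_cores error messages suggest.
    exec 200>/var/tmp/spdk_cpu_lock_000
    if flock -xn 200; then
        echo "core 0 claimed"
    else
        echo "core 0 already claimed by another process"
    fi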
00:07:32.736 [2024-11-20 17:34:32.610408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.736 [2024-11-20 17:34:32.641640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.677 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2432579 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2432579 /var/tmp/spdk2.sock 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2432579 ']' 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.678 17:34:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.678 [2024-11-20 17:34:33.353645] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:33.678 [2024-11-20 17:34:33.353697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432579 ] 00:07:33.678 [2024-11-20 17:34:33.424803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.678 [2024-11-20 17:34:33.481477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.250 17:34:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.250 17:34:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:34.250 17:34:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2432579 00:07:34.250 17:34:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2432579 00:07:34.250 17:34:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.633 lslocks: write error 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2432542 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2432542 ']' 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2432542 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2432542 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2432542' 00:07:35.633 killing process with pid 2432542 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2432542 00:07:35.633 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2432542 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2432579 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2432579 ']' 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2432579 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2432579 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.893 17:34:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2432579' 00:07:35.893 killing process with pid 2432579 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2432579 00:07:35.893 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2432579 00:07:36.154 00:07:36.154 real 0m3.482s 00:07:36.154 user 0m3.834s 00:07:36.154 sys 0m1.127s 00:07:36.154 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.154 17:34:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.154 ************************************ 00:07:36.154 END TEST locking_app_on_unlocked_coremask 00:07:36.154 ************************************ 00:07:36.154 17:34:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:36.154 17:34:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.154 17:34:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.154 17:34:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.154 ************************************ 00:07:36.154 START TEST locking_app_on_locked_coremask 00:07:36.154 ************************************ 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2433283 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2433283 /var/tmp/spdk.sock 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2433283 ']' 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.154 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.415 [2024-11-20 17:34:36.091283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.415 [2024-11-20 17:34:36.091338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433283 ] 00:07:36.415 [2024-11-20 17:34:36.166267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.415 [2024-11-20 17:34:36.197481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2433295 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2433295 /var/tmp/spdk2.sock 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2433295 /var/tmp/spdk2.sock 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2433295 /var/tmp/spdk2.sock 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2433295 ']' 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.985 17:34:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.245 [2024-11-20 17:34:36.935081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:37.245 [2024-11-20 17:34:36.935134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433295 ] 00:07:37.245 [2024-11-20 17:34:37.006708] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2433283 has claimed it. 00:07:37.245 [2024-11-20 17:34:37.006737] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:37.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2433295) - No such process 00:07:37.814 ERROR: process (pid: 2433295) is no longer running 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2433283 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2433283 00:07:37.814 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.073 lslocks: write error 00:07:38.074 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2433283 00:07:38.074 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2433283 ']' 00:07:38.074 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2433283 00:07:38.074 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:38.074 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.074 17:34:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2433283 00:07:38.333 17:34:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.334 17:34:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.334 17:34:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2433283' 00:07:38.334 killing process with pid 2433283 00:07:38.334 17:34:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2433283 00:07:38.334 17:34:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2433283 00:07:38.334 00:07:38.334 real 0m2.179s 00:07:38.334 user 0m2.484s 00:07:38.334 sys 0m0.584s 00:07:38.334 17:34:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:07:38.334 17:34:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.334 ************************************ 00:07:38.334 END TEST locking_app_on_locked_coremask 00:07:38.334 ************************************ 00:07:38.594 17:34:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:38.594 17:34:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.594 17:34:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.594 17:34:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.594 ************************************ 00:07:38.594 START TEST locking_overlapped_coremask 00:07:38.594 ************************************ 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2433656 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2433656 /var/tmp/spdk.sock 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2433656 ']' 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.594 17:34:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.594 [2024-11-20 17:34:38.355357] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:38.594 [2024-11-20 17:34:38.355405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433656 ] 00:07:38.594 [2024-11-20 17:34:38.431333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.594 [2024-11-20 17:34:38.461453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.594 [2024-11-20 17:34:38.461662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.594 [2024-11-20 17:34:38.461663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2433921 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2433921 /var/tmp/spdk2.sock 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2433921 /var/tmp/spdk2.sock 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2433921 /var/tmp/spdk2.sock 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2433921 ']' 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.535 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.536 [2024-11-20 17:34:39.196804] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:39.536 [2024-11-20 17:34:39.196861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433921 ] 00:07:39.536 [2024-11-20 17:34:39.286622] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2433656 has claimed it. 00:07:39.536 [2024-11-20 17:34:39.286662] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:40.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2433921) - No such process 00:07:40.106 ERROR: process (pid: 2433921) is no longer running 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2433656 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2433656 ']' 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2433656 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2433656 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2433656' 00:07:40.106 killing process with pid 2433656 00:07:40.106 17:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2433656 00:07:40.106 17:34:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2433656 00:07:40.368 00:07:40.368 real 0m1.788s 00:07:40.368 user 0m5.153s 00:07:40.368 sys 0m0.402s 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.368 ************************************ 00:07:40.368 END TEST locking_overlapped_coremask 00:07:40.368 ************************************ 00:07:40.368 17:34:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:40.368 17:34:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.368 17:34:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.368 17:34:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.368 ************************************ 00:07:40.368 START TEST locking_overlapped_coremask_via_rpc 00:07:40.368 ************************************ 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2434036 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2434036 /var/tmp/spdk.sock 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2434036 ']' 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.368 17:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.368 [2024-11-20 17:34:40.209081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:40.368 [2024-11-20 17:34:40.209134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434036 ] 00:07:40.630 [2024-11-20 17:34:40.289000] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
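Both overlapped-coremask cases pair -m 0x7 (cores 0-2) against -m 0x1c (cores 2-4); the masks intersect only on core 2, which is why the second instance fails with "Cannot create lock on core 2". The conflict is easy to confirm with shell arithmetic:

    # 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4; the AND isolates the clash.
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2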
00:07:40.630 [2024-11-20 17:34:40.289034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.630 [2024-11-20 17:34:40.331730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.630 [2024-11-20 17:34:40.331880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.630 [2024-11-20 17:34:40.331882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2434365 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2434365 /var/tmp/spdk2.sock 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2434365 ']' 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.201 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.201 [2024-11-20 17:34:41.071839] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:41.201 [2024-11-20 17:34:41.071893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434365 ] 00:07:41.462 [2024-11-20 17:34:41.161008] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:41.462 [2024-11-20 17:34:41.161036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.462 [2024-11-20 17:34:41.225106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.462 [2024-11-20 17:34:41.228282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.462 [2024-11-20 17:34:41.228284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.031 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.031 [2024-11-20 17:34:41.888245] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2434036 has claimed it. 
00:07:42.031 request: 00:07:42.031 { 00:07:42.031 "method": "framework_enable_cpumask_locks", 00:07:42.032 "req_id": 1 00:07:42.032 } 00:07:42.032 Got JSON-RPC error response 00:07:42.032 response: 00:07:42.032 { 00:07:42.032 "code": -32603, 00:07:42.032 "message": "Failed to claim CPU core: 2" 00:07:42.032 } 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2434036 /var/tmp/spdk.sock 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2434036 ']' 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.032 17:34:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2434365 /var/tmp/spdk2.sock 00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2434365 ']' 00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
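The JSON-RPC exchange above shows the runtime path: a target started with --disable-cpumask-locks may try to claim its cores later through framework_enable_cpumask_locks, and the call fails with code -32603 when another process already holds one of them. Since the harness's rpc_cmd wraps scripts/rpc.py, the same request could plausibly be issued by hand:

    # Hypothetical manual invocation against the second target's socket;
    # expect the -32603 "Failed to claim CPU core: 2" error seen above.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks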
00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.292 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.553 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.553 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:42.553 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:42.553 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:42.553 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:42.553 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:42.553 00:07:42.553 real 0m2.111s 00:07:42.553 user 0m0.865s 00:07:42.553 sys 0m0.157s 00:07:42.553 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.553 17:34:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.553 ************************************ 00:07:42.553 END TEST locking_overlapped_coremask_via_rpc 00:07:42.553 ************************************ 00:07:42.553 17:34:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:42.553 17:34:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2434036 ]] 00:07:42.553 17:34:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2434036 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2434036 ']' 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2434036 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2434036 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2434036' 00:07:42.553 killing process with pid 2434036 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2434036 00:07:42.553 17:34:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2434036 00:07:42.814 17:34:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2434365 ]] 00:07:42.814 17:34:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2434365 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2434365 ']' 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2434365 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2434365 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2434365' 00:07:42.814 killing process with pid 2434365 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2434365 00:07:42.814 17:34:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2434365 00:07:43.074 17:34:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:43.074 17:34:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:43.074 17:34:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2434036 ]] 00:07:43.074 17:34:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2434036 00:07:43.074 17:34:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2434036 ']' 00:07:43.074 17:34:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2434036 00:07:43.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2434036) - No such process 00:07:43.074 17:34:42 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2434036 is not found' 00:07:43.074 Process with pid 2434036 is not found 00:07:43.074 17:34:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2434365 ]] 00:07:43.074 17:34:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2434365 00:07:43.074 17:34:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2434365 ']' 00:07:43.074 17:34:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2434365 00:07:43.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2434365) - No such process 00:07:43.074 17:34:42 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2434365 is not found' 00:07:43.074 Process with pid 2434365 is not found 00:07:43.074 17:34:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:43.074 00:07:43.074 real 0m17.032s 00:07:43.074 user 0m29.322s 00:07:43.074 sys 0m5.309s 00:07:43.074 17:34:42 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.074 17:34:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.074 ************************************ 00:07:43.074 END TEST cpu_locks 00:07:43.074 ************************************ 00:07:43.074 00:07:43.074 real 0m43.107s 00:07:43.074 user 1m25.622s 00:07:43.074 sys 0m8.728s 00:07:43.074 17:34:42 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.074 17:34:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:43.074 ************************************ 00:07:43.074 END TEST event 00:07:43.074 ************************************ 00:07:43.074 17:34:42 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:43.074 17:34:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.074 17:34:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.074 17:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:43.334 ************************************ 00:07:43.334 START TEST thread 00:07:43.334 ************************************ 00:07:43.334 17:34:43 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:43.334 * Looking for test storage... 00:07:43.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:43.334 17:34:43 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:43.334 17:34:43 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:43.334 17:34:43 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:43.334 17:34:43 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:43.334 17:34:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.334 17:34:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.334 17:34:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.334 17:34:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.334 17:34:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.335 17:34:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.335 17:34:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.335 17:34:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.335 17:34:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.335 17:34:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.335 17:34:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.335 17:34:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:43.335 17:34:43 thread -- scripts/common.sh@345 -- # : 1 00:07:43.335 17:34:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.335 17:34:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.335 17:34:43 thread -- scripts/common.sh@365 -- # decimal 1 00:07:43.335 17:34:43 thread -- scripts/common.sh@353 -- # local d=1 00:07:43.335 17:34:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.335 17:34:43 thread -- scripts/common.sh@355 -- # echo 1 00:07:43.335 17:34:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.335 17:34:43 thread -- scripts/common.sh@366 -- # decimal 2 00:07:43.335 17:34:43 thread -- scripts/common.sh@353 -- # local d=2 00:07:43.335 17:34:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.335 17:34:43 thread -- scripts/common.sh@355 -- # echo 2 00:07:43.335 17:34:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.335 17:34:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.335 17:34:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.335 17:34:43 thread -- scripts/common.sh@368 -- # return 0 00:07:43.335 17:34:43 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.335 17:34:43 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.335 --rc genhtml_branch_coverage=1 00:07:43.335 --rc genhtml_function_coverage=1 00:07:43.335 --rc genhtml_legend=1 00:07:43.335 --rc geninfo_all_blocks=1 00:07:43.335 --rc geninfo_unexecuted_blocks=1 00:07:43.335 00:07:43.335 ' 00:07:43.335 17:34:43 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.335 --rc genhtml_branch_coverage=1 00:07:43.335 --rc genhtml_function_coverage=1 00:07:43.335 --rc genhtml_legend=1 00:07:43.335 --rc geninfo_all_blocks=1 00:07:43.335 --rc geninfo_unexecuted_blocks=1 00:07:43.335 
00:07:43.335 ' 00:07:43.335 17:34:43 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.335 --rc genhtml_branch_coverage=1 00:07:43.335 --rc genhtml_function_coverage=1 00:07:43.335 --rc genhtml_legend=1 00:07:43.335 --rc geninfo_all_blocks=1 00:07:43.335 --rc geninfo_unexecuted_blocks=1 00:07:43.335 00:07:43.335 ' 00:07:43.335 17:34:43 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.335 --rc genhtml_branch_coverage=1 00:07:43.335 --rc genhtml_function_coverage=1 00:07:43.335 --rc genhtml_legend=1 00:07:43.335 --rc geninfo_all_blocks=1 00:07:43.335 --rc geninfo_unexecuted_blocks=1 00:07:43.335 00:07:43.335 ' 00:07:43.335 17:34:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:43.335 17:34:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:43.335 17:34:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.335 17:34:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.335 ************************************ 00:07:43.335 START TEST thread_poller_perf 00:07:43.335 ************************************ 00:07:43.335 17:34:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:43.595 [2024-11-20 17:34:43.266923] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:43.595 [2024-11-20 17:34:43.267020] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434813 ] 00:07:43.595 [2024-11-20 17:34:43.346482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.595 [2024-11-20 17:34:43.386264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.595 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:44.536 [2024-11-20T16:34:44.452Z] ====================================== 00:07:44.536 [2024-11-20T16:34:44.452Z] busy:2406227464 (cyc) 00:07:44.536 [2024-11-20T16:34:44.452Z] total_run_count: 419000 00:07:44.536 [2024-11-20T16:34:44.452Z] tsc_hz: 2400000000 (cyc) 00:07:44.536 [2024-11-20T16:34:44.452Z] ====================================== 00:07:44.536 [2024-11-20T16:34:44.452Z] poller_cost: 5742 (cyc), 2392 (nsec) 00:07:44.536 00:07:44.536 real 0m1.180s 00:07:44.536 user 0m1.082s 00:07:44.536 sys 0m0.093s 00:07:44.536 17:34:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.536 17:34:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:44.536 ************************************ 00:07:44.536 END TEST thread_poller_perf 00:07:44.536 ************************************ 00:07:44.825 17:34:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:44.825 17:34:44 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:44.825 17:34:44 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.825 17:34:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:44.825 ************************************ 00:07:44.825 START TEST thread_poller_perf 00:07:44.825 ************************************ 00:07:44.825 17:34:44 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:44.825 [2024-11-20 17:34:44.525652] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:44.825 [2024-11-20 17:34:44.525742] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435164 ] 00:07:44.825 [2024-11-20 17:34:44.609302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.825 [2024-11-20 17:34:44.648844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.825 Running 1000 pollers for 1 seconds with 0 microseconds period. 
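The summary for the 1-microsecond-period run above is internally consistent if poller_cost is taken as busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; the printed 5742 cyc / 2392 nsec follow from the other fields (a sketch, assuming that formula):

    # Reproduce poller_cost from the run's own counters.
    busy=2406227464; runs=419000; tsc_hz=2400000000
    echo "poller_cost: $(( busy / runs )) cyc"                         # 5742
    echo "poller_cost: $(( busy * 1000000000 / runs / tsc_hz )) nsec"  # 2392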
00:07:46.209 [2024-11-20T16:34:46.125Z] ====================================== 00:07:46.209 [2024-11-20T16:34:46.125Z] busy:2401529102 (cyc) 00:07:46.209 [2024-11-20T16:34:46.125Z] total_run_count: 5555000 00:07:46.209 [2024-11-20T16:34:46.125Z] tsc_hz: 2400000000 (cyc) 00:07:46.209 [2024-11-20T16:34:46.125Z] ====================================== 00:07:46.209 [2024-11-20T16:34:46.125Z] poller_cost: 432 (cyc), 180 (nsec) 00:07:46.209 00:07:46.209 real 0m1.179s 00:07:46.209 user 0m1.077s 00:07:46.209 sys 0m0.098s 00:07:46.209 17:34:45 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.209 17:34:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.209 ************************************ 00:07:46.209 END TEST thread_poller_perf 00:07:46.209 ************************************ 00:07:46.209 17:34:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:46.209 00:07:46.209 real 0m2.714s 00:07:46.209 user 0m2.345s 00:07:46.209 sys 0m0.381s 00:07:46.209 17:34:45 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.209 17:34:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:46.209 ************************************ 00:07:46.209 END TEST thread 00:07:46.209 ************************************ 00:07:46.209 17:34:45 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:46.209 17:34:45 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:46.209 17:34:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.209 17:34:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.209 17:34:45 -- common/autotest_common.sh@10 -- # set +x 00:07:46.209 ************************************ 00:07:46.209 START TEST app_cmdline 00:07:46.209 ************************************ 00:07:46.209 17:34:45 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:46.209 * Looking for test storage... 
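Read together, the two runs above give a rough measure of timer overhead: the poller with a 1 microsecond period costs 5742 cycles per invocation, while the same poller registered with no period (-l 0) costs 432 cycles (~180 ns). The invocation counts also differ (419k vs 5.55M over the same one-second window), so treat the roughly 13x gap as indicative of the timed-poller path's extra bookkeeping rather than as a strict microbenchmark.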
00:07:46.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.209 17:34:45 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:46.209 17:34:45 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:46.209 17:34:45 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:46.209 17:34:45 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.209 17:34:45 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:46.209 17:34:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.209 17:34:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:46.209 17:34:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:46.210 17:34:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.210 17:34:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:46.210 17:34:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.210 17:34:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.210 17:34:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.210 17:34:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:46.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.210 --rc genhtml_branch_coverage=1 00:07:46.210 --rc genhtml_function_coverage=1 00:07:46.210 --rc genhtml_legend=1 00:07:46.210 --rc geninfo_all_blocks=1 00:07:46.210 --rc geninfo_unexecuted_blocks=1 00:07:46.210 00:07:46.210 ' 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:46.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.210 --rc genhtml_branch_coverage=1 00:07:46.210 --rc genhtml_function_coverage=1 00:07:46.210 --rc genhtml_legend=1 00:07:46.210 --rc geninfo_all_blocks=1 00:07:46.210 --rc geninfo_unexecuted_blocks=1 
00:07:46.210 00:07:46.210 ' 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:46.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.210 --rc genhtml_branch_coverage=1 00:07:46.210 --rc genhtml_function_coverage=1 00:07:46.210 --rc genhtml_legend=1 00:07:46.210 --rc geninfo_all_blocks=1 00:07:46.210 --rc geninfo_unexecuted_blocks=1 00:07:46.210 00:07:46.210 ' 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:46.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.210 --rc genhtml_branch_coverage=1 00:07:46.210 --rc genhtml_function_coverage=1 00:07:46.210 --rc genhtml_legend=1 00:07:46.210 --rc geninfo_all_blocks=1 00:07:46.210 --rc geninfo_unexecuted_blocks=1 00:07:46.210 00:07:46.210 ' 00:07:46.210 17:34:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:46.210 17:34:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2435533 00:07:46.210 17:34:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2435533 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2435533 ']' 00:07:46.210 17:34:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.210 17:34:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.210 [2024-11-20 17:34:46.067938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:46.210 [2024-11-20 17:34:46.067997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435533 ] 00:07:46.210 [2024-11-20 17:34:46.112779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.470 [2024-11-20 17:34:46.144039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.470 17:34:46 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.470 17:34:46 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:46.470 17:34:46 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:46.730 { 00:07:46.730 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:46.730 "fields": { 00:07:46.730 "major": 24, 00:07:46.730 "minor": 9, 00:07:46.730 "patch": 1, 00:07:46.730 "suffix": "-pre", 00:07:46.730 "commit": "b18e1bd62" 00:07:46.730 } 00:07:46.730 } 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:46.730 17:34:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:46.730 17:34:46 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.992 request: 00:07:46.992 { 00:07:46.992 "method": "env_dpdk_get_mem_stats", 00:07:46.992 "req_id": 1 00:07:46.992 } 00:07:46.992 Got JSON-RPC error response 00:07:46.992 response: 00:07:46.992 { 00:07:46.992 "code": -32601, 00:07:46.992 "message": "Method not found" 00:07:46.992 } 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.992 17:34:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2435533 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2435533 ']' 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2435533 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2435533 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2435533' 00:07:46.992 killing process with pid 2435533 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@969 -- # kill 2435533 00:07:46.992 17:34:46 app_cmdline -- common/autotest_common.sh@974 -- # wait 2435533 00:07:47.253 00:07:47.253 real 0m1.152s 00:07:47.253 user 0m1.386s 00:07:47.253 sys 0m0.414s 00:07:47.253 17:34:46 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.253 17:34:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.253 ************************************ 00:07:47.253 END TEST app_cmdline 00:07:47.253 ************************************ 00:07:47.253 17:34:46 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:47.253 17:34:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.253 17:34:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.253 17:34:46 -- common/autotest_common.sh@10 -- # set +x 00:07:47.253 ************************************ 00:07:47.253 START TEST version 00:07:47.253 ************************************ 00:07:47.253 17:34:47 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:47.253 * Looking for test storage... 
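Written out as plain commands, the cmdline test just traced is the following (paths shortened to the repo root; the outputs in comments are summarized from the log above, not re-captured):

    # Serve only two RPCs; everything else must fail with 'Method not found'.
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # ... wait for /var/tmp/spdk.sock to accept connections ...

    ./scripts/rpc.py spdk_get_version
    #   {"version": "SPDK v24.09.1-pre git sha1 b18e1bd62",
    #    "fields": {"major": 24, "minor": 9, "patch": 1, "suffix": "-pre", ...}}

    ./scripts/rpc.py rpc_get_methods          # exactly: rpc_get_methods, spdk_get_version

    ./scripts/rpc.py env_dpdk_get_mem_stats   # not on the allowlist
    #   JSON-RPC error -32601: "Method not found"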
00:07:47.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:47.253 17:34:47 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.253 17:34:47 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.253 17:34:47 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.515 17:34:47 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.515 17:34:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.515 17:34:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.515 17:34:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.515 17:34:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.515 17:34:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.515 17:34:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.515 17:34:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.515 17:34:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.515 17:34:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.515 17:34:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.515 17:34:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.515 17:34:47 version -- scripts/common.sh@344 -- # case "$op" in 00:07:47.515 17:34:47 version -- scripts/common.sh@345 -- # : 1 00:07:47.515 17:34:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.515 17:34:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.515 17:34:47 version -- scripts/common.sh@365 -- # decimal 1 00:07:47.515 17:34:47 version -- scripts/common.sh@353 -- # local d=1 00:07:47.515 17:34:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.515 17:34:47 version -- scripts/common.sh@355 -- # echo 1 00:07:47.515 17:34:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.515 17:34:47 version -- scripts/common.sh@366 -- # decimal 2 00:07:47.515 17:34:47 version -- scripts/common.sh@353 -- # local d=2 00:07:47.515 17:34:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.515 17:34:47 version -- scripts/common.sh@355 -- # echo 2 00:07:47.515 17:34:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.515 17:34:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.515 17:34:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.515 17:34:47 version -- scripts/common.sh@368 -- # return 0 00:07:47.515 17:34:47 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.515 17:34:47 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.515 --rc genhtml_branch_coverage=1 00:07:47.515 --rc genhtml_function_coverage=1 00:07:47.515 --rc genhtml_legend=1 00:07:47.515 --rc geninfo_all_blocks=1 00:07:47.515 --rc geninfo_unexecuted_blocks=1 00:07:47.515 00:07:47.515 ' 00:07:47.515 17:34:47 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:47.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.515 --rc genhtml_branch_coverage=1 00:07:47.515 --rc genhtml_function_coverage=1 00:07:47.515 --rc genhtml_legend=1 00:07:47.515 --rc geninfo_all_blocks=1 00:07:47.515 --rc geninfo_unexecuted_blocks=1 00:07:47.515 00:07:47.515 ' 00:07:47.515 17:34:47 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:47.515 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.515 --rc genhtml_branch_coverage=1 00:07:47.515 --rc genhtml_function_coverage=1 00:07:47.515 --rc genhtml_legend=1 00:07:47.515 --rc geninfo_all_blocks=1 00:07:47.515 --rc geninfo_unexecuted_blocks=1 00:07:47.515 00:07:47.515 ' 00:07:47.515 17:34:47 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.515 --rc genhtml_branch_coverage=1 00:07:47.515 --rc genhtml_function_coverage=1 00:07:47.515 --rc genhtml_legend=1 00:07:47.515 --rc geninfo_all_blocks=1 00:07:47.515 --rc geninfo_unexecuted_blocks=1 00:07:47.515 00:07:47.515 ' 00:07:47.515 17:34:47 version -- app/version.sh@17 -- # get_header_version major 00:07:47.515 17:34:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.515 17:34:47 version -- app/version.sh@14 -- # cut -f2 00:07:47.515 17:34:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.515 17:34:47 version -- app/version.sh@17 -- # major=24 00:07:47.515 17:34:47 version -- app/version.sh@18 -- # get_header_version minor 00:07:47.515 17:34:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.515 17:34:47 version -- app/version.sh@14 -- # cut -f2 00:07:47.515 17:34:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.515 17:34:47 version -- app/version.sh@18 -- # minor=9 00:07:47.515 17:34:47 version -- app/version.sh@19 -- # get_header_version patch 00:07:47.515 17:34:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.515 17:34:47 version -- app/version.sh@14 -- # cut -f2 00:07:47.515 17:34:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.515 17:34:47 version -- app/version.sh@19 -- # patch=1 00:07:47.515 17:34:47 version -- app/version.sh@20 -- # get_header_version suffix 00:07:47.515 17:34:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.515 17:34:47 version -- app/version.sh@14 -- # cut -f2 00:07:47.515 17:34:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.515 17:34:47 version -- app/version.sh@20 -- # suffix=-pre 00:07:47.515 17:34:47 version -- app/version.sh@22 -- # version=24.9 00:07:47.515 17:34:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:47.515 17:34:47 version -- app/version.sh@25 -- # version=24.9.1 00:07:47.515 17:34:47 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:47.515 17:34:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:47.515 17:34:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:47.515 17:34:47 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:47.515 17:34:47 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:47.515 00:07:47.515 real 0m0.279s 00:07:47.515 user 0m0.164s 00:07:47.515 sys 0m0.163s 00:07:47.515 17:34:47 
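For reference, the assembly version.sh traced above condenses to the sketch below. The grep/cut/tr pipeline is as logged; the final '-pre' to 'rc0' mapping is inferred from the closing 24.9.1rc0 comparison against python's spdk.__version__, so treat that last line as a reconstruction:

    get_header_version() {   # e.g. MAJOR -> 24, read from include/spdk/version.h
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 9
    patch=$(get_header_version PATCH)    # 1
    suffix=$(get_header_version SUFFIX)  # -pre

    version="${major}.${minor}"                        # 24.9
    (( patch != 0 )) && version="${version}.${patch}"  # 24.9.1
    [[ $suffix == -pre ]] && version="${version}rc0"   # 24.9.1rc0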
version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.515 17:34:47 version -- common/autotest_common.sh@10 -- # set +x 00:07:47.515 ************************************ 00:07:47.515 END TEST version 00:07:47.515 ************************************ 00:07:47.515 17:34:47 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:47.515 17:34:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:47.515 17:34:47 -- spdk/autotest.sh@194 -- # uname -s 00:07:47.515 17:34:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:47.515 17:34:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:47.515 17:34:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:47.515 17:34:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:47.515 17:34:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:47.515 17:34:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:47.515 17:34:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.515 17:34:47 -- common/autotest_common.sh@10 -- # set +x 00:07:47.515 17:34:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:47.515 17:34:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:47.515 17:34:47 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:47.515 17:34:47 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:47.515 17:34:47 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:47.515 17:34:47 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:47.515 17:34:47 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.515 17:34:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:47.515 17:34:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.515 17:34:47 -- common/autotest_common.sh@10 -- # set +x 00:07:47.777 ************************************ 00:07:47.777 START TEST nvmf_tcp 00:07:47.777 ************************************ 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.777 * Looking for test storage... 
00:07:47.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.777 17:34:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.777 --rc genhtml_branch_coverage=1 00:07:47.777 --rc genhtml_function_coverage=1 00:07:47.777 --rc genhtml_legend=1 00:07:47.777 --rc geninfo_all_blocks=1 00:07:47.777 --rc geninfo_unexecuted_blocks=1 00:07:47.777 00:07:47.777 ' 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:47.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.777 --rc genhtml_branch_coverage=1 00:07:47.777 --rc genhtml_function_coverage=1 00:07:47.777 --rc genhtml_legend=1 00:07:47.777 --rc geninfo_all_blocks=1 00:07:47.777 --rc geninfo_unexecuted_blocks=1 00:07:47.777 00:07:47.777 ' 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:07:47.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.777 --rc genhtml_branch_coverage=1 00:07:47.777 --rc genhtml_function_coverage=1 00:07:47.777 --rc genhtml_legend=1 00:07:47.777 --rc geninfo_all_blocks=1 00:07:47.777 --rc geninfo_unexecuted_blocks=1 00:07:47.777 00:07:47.777 ' 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.777 --rc genhtml_branch_coverage=1 00:07:47.777 --rc genhtml_function_coverage=1 00:07:47.777 --rc genhtml_legend=1 00:07:47.777 --rc geninfo_all_blocks=1 00:07:47.777 --rc geninfo_unexecuted_blocks=1 00:07:47.777 00:07:47.777 ' 00:07:47.777 17:34:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:47.777 17:34:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:47.777 17:34:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.777 17:34:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 ************************************ 00:07:48.039 START TEST nvmf_target_core 00:07:48.039 ************************************ 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:48.039 * Looking for test storage... 00:07:48.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.039 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:48.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.040 --rc genhtml_branch_coverage=1 00:07:48.040 --rc genhtml_function_coverage=1 00:07:48.040 --rc genhtml_legend=1 00:07:48.040 --rc geninfo_all_blocks=1 00:07:48.040 --rc geninfo_unexecuted_blocks=1 00:07:48.040 00:07:48.040 ' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:48.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.040 --rc genhtml_branch_coverage=1 00:07:48.040 --rc genhtml_function_coverage=1 00:07:48.040 --rc genhtml_legend=1 00:07:48.040 --rc geninfo_all_blocks=1 00:07:48.040 --rc geninfo_unexecuted_blocks=1 00:07:48.040 00:07:48.040 ' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:48.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.040 --rc genhtml_branch_coverage=1 00:07:48.040 --rc genhtml_function_coverage=1 00:07:48.040 --rc genhtml_legend=1 00:07:48.040 --rc geninfo_all_blocks=1 00:07:48.040 --rc geninfo_unexecuted_blocks=1 00:07:48.040 00:07:48.040 ' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:48.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.040 --rc genhtml_branch_coverage=1 00:07:48.040 --rc genhtml_function_coverage=1 00:07:48.040 --rc genhtml_legend=1 00:07:48.040 --rc geninfo_all_blocks=1 00:07:48.040 --rc geninfo_unexecuted_blocks=1 00:07:48.040 00:07:48.040 ' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:48.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.040 17:34:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.301 
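One message in the trace above deserves a flag: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', i.e. an empty expansion reaches a numeric test, which is what prints "integer expression expected" (harmless here, as the branch simply falls through). A defensive spelling that avoids the warning; the variable name below is a stand-in, since the trace does not show which expansion is empty:

    # test(1) rejects '' as an integer; default the expansion instead
    if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then   # SOME_NUMERIC_FLAG is hypothetical
        :   # whatever the guarded branch at line 33 would do
    fi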
************************************ 00:07:48.301 START TEST nvmf_abort 00:07:48.301 ************************************ 00:07:48.301 17:34:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:48.301 * Looking for test storage... 00:07:48.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.301 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:48.301 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:48.301 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:48.301 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:48.301 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:48.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.302 --rc genhtml_branch_coverage=1 00:07:48.302 --rc genhtml_function_coverage=1 00:07:48.302 --rc genhtml_legend=1 00:07:48.302 --rc geninfo_all_blocks=1 00:07:48.302 --rc geninfo_unexecuted_blocks=1 00:07:48.302 00:07:48.302 ' 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:48.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.302 --rc genhtml_branch_coverage=1 00:07:48.302 --rc genhtml_function_coverage=1 00:07:48.302 --rc genhtml_legend=1 00:07:48.302 --rc geninfo_all_blocks=1 00:07:48.302 --rc geninfo_unexecuted_blocks=1 00:07:48.302 00:07:48.302 ' 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:48.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.302 --rc genhtml_branch_coverage=1 00:07:48.302 --rc genhtml_function_coverage=1 00:07:48.302 --rc genhtml_legend=1 00:07:48.302 --rc geninfo_all_blocks=1 00:07:48.302 --rc geninfo_unexecuted_blocks=1 00:07:48.302 00:07:48.302 ' 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:48.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.302 --rc genhtml_branch_coverage=1 00:07:48.302 --rc genhtml_function_coverage=1 00:07:48.302 --rc genhtml_legend=1 00:07:48.302 --rc geninfo_all_blocks=1 00:07:48.302 --rc geninfo_unexecuted_blocks=1 00:07:48.302 00:07:48.302 ' 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:48.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:48.302 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:48.303 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:48.303 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:48.564 17:34:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:56.699 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.700 17:34:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:56.700 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:56.700 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:56.700 17:34:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:56.700 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:56.700 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:56.700 17:34:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:56.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:07:56.700 00:07:56.700 --- 10.0.0.2 ping statistics --- 00:07:56.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.700 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:07:56.700 00:07:56.700 --- 10.0.0.1 ping statistics --- 00:07:56.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.700 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=2439730 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 2439730 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:56.700 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2439730 ']' 00:07:56.701 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.701 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.701 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.701 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.701 17:34:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.701 [2024-11-20 17:34:55.820012] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:56.701 [2024-11-20 17:34:55.820074] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.701 [2024-11-20 17:34:55.910666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.701 [2024-11-20 17:34:55.960383] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.701 [2024-11-20 17:34:55.960443] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.701 [2024-11-20 17:34:55.960452] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.701 [2024-11-20 17:34:55.960459] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.701 [2024-11-20 17:34:55.960468] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.701 [2024-11-20 17:34:55.960643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.701 [2024-11-20 17:34:55.960788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.701 [2024-11-20 17:34:55.960787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.961 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.961 [2024-11-20 17:34:56.703696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 Malloc0 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 Delay0 
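At this point target/abort.sh has assembled the transport and the delay-backed bdev; the subsystem, namespace and listener RPCs follow directly below. Condensed into direct rpc.py calls, the whole bring-up looks roughly like this (a sketch only — the script drives these through its rpc_cmd wrapper and the socket defaults shown above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256          # same flags the harness passed above
    $rpc bdev_malloc_create 64 4096 -b Malloc0                   # 64 MiB RAM-backed bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000             # ~1 s artificial latency so aborts can catch I/O in flight
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420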
00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 [2024-11-20 17:34:56.785899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.962 17:34:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:56.962 [2024-11-20 17:34:56.874655] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:59.509 Initializing NVMe Controllers 00:07:59.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:59.509 controller IO queue size 128 less than required 00:07:59.509 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:59.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:59.509 Initialization complete. Launching workers. 
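The counter block printed next reconciles exactly; the submission-side and completion-side totals agree. A quick check on the numbers below (plain shell arithmetic):

    echo $(( 28432 + 57 ))     # success + unsuccessful aborts  = 28489, matches "abort submitted"
    echo $(( 28489 + 62 ))     # submitted + failed-to-submit   = 28551 abort attempts in total
    echo $(( 123 + 28428 ))    # I/O completed + I/O failed     = 28551, i.e. one abort attempt per I/O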
00:07:59.509 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28428 00:07:59.509 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28489, failed to submit 62 00:07:59.509 success 28432, unsuccessful 57, failed 0 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.509 17:34:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.509 rmmod nvme_tcp 00:07:59.509 rmmod nvme_fabrics 00:07:59.509 rmmod nvme_keyring 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 2439730 ']' 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 2439730 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2439730 ']' 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2439730 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2439730 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2439730' 00:07:59.509 killing process with pid 2439730 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2439730 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2439730 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:59.509 17:34:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.509 17:34:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.422 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.422 00:08:01.422 real 0m13.312s 00:08:01.422 user 0m13.656s 00:08:01.422 sys 0m6.614s 00:08:01.422 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.422 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:01.422 ************************************ 00:08:01.422 END TEST nvmf_abort 00:08:01.422 ************************************ 00:08:01.422 17:35:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:01.422 17:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.422 17:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.422 17:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.684 ************************************ 00:08:01.684 START TEST nvmf_ns_hotplug_stress 00:08:01.684 ************************************ 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:01.684 * Looking for test storage... 
00:08:01.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:01.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.684 --rc genhtml_branch_coverage=1 00:08:01.684 --rc genhtml_function_coverage=1 00:08:01.684 --rc genhtml_legend=1 00:08:01.684 --rc geninfo_all_blocks=1 00:08:01.684 --rc geninfo_unexecuted_blocks=1 00:08:01.684 00:08:01.684 ' 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:01.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.684 --rc genhtml_branch_coverage=1 00:08:01.684 --rc genhtml_function_coverage=1 00:08:01.684 --rc genhtml_legend=1 00:08:01.684 --rc geninfo_all_blocks=1 00:08:01.684 --rc geninfo_unexecuted_blocks=1 00:08:01.684 00:08:01.684 ' 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:01.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.684 --rc genhtml_branch_coverage=1 00:08:01.684 --rc genhtml_function_coverage=1 00:08:01.684 --rc genhtml_legend=1 00:08:01.684 --rc geninfo_all_blocks=1 00:08:01.684 --rc geninfo_unexecuted_blocks=1 00:08:01.684 00:08:01.684 ' 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:01.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.684 --rc genhtml_branch_coverage=1 00:08:01.684 --rc genhtml_function_coverage=1 00:08:01.684 --rc genhtml_legend=1 00:08:01.684 --rc geninfo_all_blocks=1 00:08:01.684 --rc geninfo_unexecuted_blocks=1 00:08:01.684 00:08:01.684 ' 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.684 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.946 17:35:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.085 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:10.086 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.086 17:35:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:10.086 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:10.086 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:10.086 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.086 17:35:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.086 17:35:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:10.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:08:10.086 00:08:10.086 --- 10.0.0.2 ping statistics --- 00:08:10.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.086 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:08:10.086 00:08:10.086 --- 10.0.0.1 ping statistics --- 00:08:10.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.086 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=2444770 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 2444770 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:10.086 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2444770 ']' 00:08:10.087 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.087 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.087 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.087 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.087 17:35:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.087 [2024-11-20 17:35:09.251986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:10.087 [2024-11-20 17:35:09.252056] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.087 [2024-11-20 17:35:09.341986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.087 [2024-11-20 17:35:09.389531] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.087 [2024-11-20 17:35:09.389591] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.087 [2024-11-20 17:35:09.389601] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.087 [2024-11-20 17:35:09.389608] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.087 [2024-11-20 17:35:09.389614] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
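nvmf_tgt is started here with -m 0xE, and the three reactor notices that follow land exactly on the set bits of that mask; the spdk_nvme_perf initiator used later in this test pins to -c 0x1, so target and initiator never share a core. Expanding such a mask (a throwaway helper, not part of the harness):

    mask=0xE                       # binary 1110 -> cores 1, 2 and 3
    for core in 0 1 2 3; do
        (( mask >> core & 1 )) && echo "reactor expected on core $core"
    done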
00:08:10.087 [2024-11-20 17:35:09.389779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.087 [2024-11-20 17:35:09.389918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.087 [2024-11-20 17:35:09.389919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.348 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.348 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:10.348 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:10.348 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.348 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.348 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.348 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:10.348 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:10.609 [2024-11-20 17:35:10.294533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.609 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:10.870 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.870 [2024-11-20 17:35:10.708322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.870 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.131 17:35:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:11.392 Malloc0 00:08:11.392 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:11.652 Delay0 00:08:11.652 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.652 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:11.913 NULL1 00:08:11.913 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
00:08:11.913 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:12.174 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2445283 00:08:12.174 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:12.174 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:12.174 17:35:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.435 17:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.435 17:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:12.435 17:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:12.695 true 00:08:12.695 17:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:12.695 17:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.955 17:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.955 17:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:12.955 17:35:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:13.215 true 00:08:13.215 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:13.215 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.474 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.474 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:13.474 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:13.734 true 00:08:13.734 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:13.734 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.995 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.256 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:14.256 17:35:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:14.256 true 00:08:14.256 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:14.256 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.516 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.776 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:14.776 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:14.776 true 00:08:14.776 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:14.776 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.036 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.296 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:15.296 17:35:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:15.296 true 00:08:15.296 17:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:15.296 17:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.555 17:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.815 17:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:15.815 17:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:15.815 true 00:08:16.075 17:35:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:16.075 17:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.075 17:35:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.335 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:16.335 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:16.594 true 00:08:16.594 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:16.594 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.594 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.853 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:16.853 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:17.113 true 00:08:17.113 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:17.113 17:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.113 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.372 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:17.372 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:17.632 true 00:08:17.632 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:17.632 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.891 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.892 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:17.892 17:35:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:18.151 true 00:08:18.151 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:18.151 17:35:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.411 17:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.411 17:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:18.411 17:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:18.671 true 00:08:18.671 17:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:18.671 17:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.931 17:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.191 17:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:19.191 17:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:19.191 true 00:08:19.191 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:19.191 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.450 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.710 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:19.710 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:19.710 true 00:08:19.710 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:19.710 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.969 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.229 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:20.229 17:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:20.229 true 00:08:20.489 17:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:20.489 17:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.489 17:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.750 17:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:20.750 17:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:21.010 true 00:08:21.010 17:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:21.010 17:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.010 17:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.270 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:21.270 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:21.530 true 00:08:21.530 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:21.530 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.790 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.790 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:21.790 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:22.051 true 00:08:22.051 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:22.051 17:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.313 17:35:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.313 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:22.313 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:22.573 true 00:08:22.573 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:22.573 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.833 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.094 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:23.094 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:23.094 true 00:08:23.094 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:23.094 17:35:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.355 17:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.615 17:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:23.615 17:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:23.615 true 00:08:23.615 17:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:23.615 17:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.874 17:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.136 17:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:24.136 17:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:24.136 true 00:08:24.136 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:24.136 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.397 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.658 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:24.658 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:24.658 true 00:08:24.919 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:24.919 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.919 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.180 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:25.180 17:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:25.180 true 00:08:25.439 17:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:25.439 17:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.439 17:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.700 17:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:25.700 17:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:25.960 true 00:08:25.961 17:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:25.961 17:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.961 17:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.220 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:26.220 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:26.480 true 00:08:26.480 17:35:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:26.480 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.480 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.741 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:26.741 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:27.002 true 00:08:27.002 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:27.002 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.264 17:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.264 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:27.264 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:27.524 true 00:08:27.524 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:27.524 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.783 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.783 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:27.783 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:28.043 true 00:08:28.043 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:28.043 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.304 17:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.304 17:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:28.304 17:35:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:28.564 true 00:08:28.564 17:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:28.564 17:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.824 17:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.824 17:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:28.824 17:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:29.084 true 00:08:29.084 17:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:29.084 17:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.345 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.345 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:29.345 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:29.605 true 00:08:29.605 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:29.605 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.866 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.867 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:29.867 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:30.127 true 00:08:30.127 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:30.127 17:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.386 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.386 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:30.386 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:30.647 true 00:08:30.647 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:30.647 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.913 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.913 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:30.913 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:31.172 true 00:08:31.172 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:31.172 17:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.432 17:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.432 17:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:31.432 17:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:31.693 true 00:08:31.693 17:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:31.693 17:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.953 17:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.214 17:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:32.214 17:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:32.214 true 00:08:32.214 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:32.214 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.474 17:35:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.474 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:32.474 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:32.735 true 00:08:32.735 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:32.735 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.994 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.995 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:32.995 17:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:33.255 true 00:08:33.255 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:33.255 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.516 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.516 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:33.516 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:33.777 true 00:08:33.777 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:33.777 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.039 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.039 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:34.039 17:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:34.299 true 00:08:34.299 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:34.299 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.559 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.559 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:34.819 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:34.819 true 00:08:34.819 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:34.819 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.079 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.079 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:35.079 17:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:35.340 true 00:08:35.340 17:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:35.340 17:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.601 17:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.601 17:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:35.601 17:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:35.861 true 00:08:35.862 17:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:35.862 17:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.122 17:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.122 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:36.122 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:36.383 true 00:08:36.383 17:35:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:36.383 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.644 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.644 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:36.644 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:36.905 true 00:08:36.905 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:36.905 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.165 17:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.165 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:37.165 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:37.426 true 00:08:37.426 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:37.426 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.686 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.686 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:37.686 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:37.947 true 00:08:37.947 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:37.947 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.207 17:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.207 17:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:38.207 17:35:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:38.468 true 00:08:38.468 17:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:38.468 17:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.792 17:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.098 17:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:39.098 17:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:39.098 true 00:08:39.098 17:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:39.098 17:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.420 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.420 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:39.420 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:39.699 true 00:08:39.699 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:39.699 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.699 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.959 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:39.959 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:40.220 true 00:08:40.220 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:40.220 17:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.483 17:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.483 17:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:08:40.483 17:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:08:40.745 true 00:08:40.745 17:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:40.745 17:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.007 17:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.007 17:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:08:41.007 17:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:08:41.268 true 00:08:41.268 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:41.268 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.528 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.528 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:08:41.529 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:08:41.789 true 00:08:41.789 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:41.789 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.050 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.311 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:08:42.311 17:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:08:42.311 true 00:08:42.311 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:42.311 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.572 
Initializing NVMe Controllers
00:08:42.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:42.572 Controller IO queue size 128, less than required.
00:08:42.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:42.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:42.572 Initialization complete. Launching workers.
00:08:42.572 ========================================================
00:08:42.572                                                                              Latency(us)
00:08:42.572 Device Information                                                       :     IOPS      MiB/s    Average        min        max
00:08:42.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30969.33      15.12    4133.13    1123.81    7829.17
00:08:42.572 ========================================================
00:08:42.572 Total                                                                    : 30969.33      15.12    4133.13    1123.81    7829.17
00:08:42.572
00:08:42.572 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.834 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:08:42.834 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:08:42.834 true 00:08:42.834 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2445283 00:08:42.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2445283) - No such process 00:08:42.834 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2445283 00:08:42.834 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.095 17:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:43.356 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:43.356 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:43.356 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:43.356 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.356 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:43.356 null0 00:08:43.356 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.356 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.356 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:43.617 null1 00:08:43.617 17:35:43
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.617 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.617 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:43.877 null2 00:08:43.878 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.878 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.878 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:43.878 null3 00:08:43.878 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:43.878 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:43.878 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:44.139 null4 00:08:44.139 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.139 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.139 17:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:44.399 null5 00:08:44.399 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.399 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.399 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:44.399 null6 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:44.660 null7 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
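The perf summary above is internally consistent: 30969.33 IOPS at an average latency of 4133.13 us keeps about 30969.33 x 4.13313e-3 ≈ 128 commands in flight, matching the reported queue size of 128, and 15.12 MiB/s divided by 30969.33 IOPS works out to ~512-byte I/Os. The trace then resizes the NULL1 bdev while the initiator runs (the lone "true" is the bdev_null_resize RPC response), reaps the already-exited initiator, and detaches both namespaces. A minimal sketch of that shape, assuming an illustrative perf_pid variable, starting size, and one-unit resize step, none of which are spelled out in the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_subsystem_add_ns $nqn Delay0         # attach the Delay0 bdev as a new namespace
null_size=1000                                 # illustrative start; the trace shows it reaching 1057
while kill -0 "$perf_pid" 2>/dev/null; do      # loop while the I/O generator is alive
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 $null_size     # hot-resize the backing bdev under I/O
done
wait "$perf_pid"                               # reap it; here it had already exited (No such process)
$rpc nvmf_subsystem_remove_ns $nqn 1           # detach both namespaces before the stress phase
$rpc nvmf_subsystem_remove_ns $nqn 2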
00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
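By this point the script has created eight null bdevs and is launching one backgrounded hotplug worker per bdev; the wait on all eight worker PIDs appears just below. A sketch of the whole stress phase reconstructed from the logged commands (ns_hotplug_stress.sh lines 14-18 and 58-66 in the trace); treat it as the shape of the script rather than its verbatim source:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                         # lines 14-18: one hotplug worker
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc nvmf_subsystem_add_ns -n $nsid $nqn $bdev   # attach the namespace
        $rpc nvmf_subsystem_remove_ns $nqn $nsid         # then detach it again
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do   # line 60: null<i>, 100 MiB, 4096-byte blocks
    $rpc bdev_null_create null$i 100 4096
done
for ((i = 0; i < nthreads; i++)); do   # lines 62-64: one worker per namespace ID
    add_remove $((i + 1)) null$i &
    pids+=($!)
done
wait "${pids[@]}"                      # line 66: block until all eight workers exit

Because all eight workers hammer the same subsystem concurrently, their RPC calls interleave freely in the trace that follows; that interleaving is exactly the hotplug race the test is exercising.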
00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:44.660 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:44.661 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:44.661 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2452041 2452043 2452044 2452046 2452048 2452050 2452052 2452053 00:08:44.661 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:44.661 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:44.661 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:44.661 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.661 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:44.921 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:44.921 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:44.922 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.922 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:44.922 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:44.922 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:44.922 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:44.922 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:45.182 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.183 17:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:45.183 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.444 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:45.705 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:45.966 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.226 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.226 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.226 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:46.226 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.226 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.226 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:46.226 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.226 17:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.226 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.487 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.748 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.749 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.749 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.749 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.749 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:46.749 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.749 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
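Each burst of add_ns/remove_ns above is one more pass of the eight racing workers; nothing new happens until every worker's loop hits its 10-iteration limit. When reproducing this by hand, the live namespace map can be checked between passes with the same rpc.py used throughout (running nvmf_get_subsystems here is this note's suggestion, not something the script does):

$rpc nvmf_get_subsystems    # lists each subsystem with its currently attached namespaces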
00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.010 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.270 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.270 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.270 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.270 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:47.270 17:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.270 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.271 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.271 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.271 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.271 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.532 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.792 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.792 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.792 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.792 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.793 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.054 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:48.314 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:48.314 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.314 17:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.314 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.575 17:35:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.575 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.575 rmmod nvme_tcp 00:08:48.836 rmmod nvme_fabrics 00:08:48.836 rmmod nvme_keyring 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 2444770 ']' 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 2444770 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2444770 ']' 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2444770 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2444770 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2444770' 00:08:48.836 killing process with pid 2444770 00:08:48.836 17:35:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2444770 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2444770 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.836 17:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.385 00:08:51.385 real 0m49.413s 00:08:51.385 user 3m20.991s 00:08:51.385 sys 0m17.448s 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.385 ************************************ 00:08:51.385 END TEST nvmf_ns_hotplug_stress 00:08:51.385 ************************************ 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.385 ************************************ 00:08:51.385 START TEST nvmf_delete_subsystem 00:08:51.385 ************************************ 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:51.385 * Looking for test storage... 
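The teardown traced above (nvmftestfini, then nvmfcleanup) always runs in a fixed order; the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines are the visible output of the modprobe -r calls. Collected into one place, with the namespace-removal step hedged because the trace shows _remove_spdk_ns being invoked but not its body:

    sync
    modprobe -v -r nvme-tcp                   # cascades to nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2444770 && wait 2444770              # killprocess: the nvmf_tgt reactor_1 pid
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK-tagged rules
    _remove_spdk_ns                           # assumed to delete the cvl_0_0_ns_spdk netns
    ip -4 addr flush cvl_0_1

The timing summary (real 0m49.413s, user 3m20.991s, sys 0m17.448s) closes TEST nvmf_ns_hotplug_stress, and run_test immediately launches the next suite, nvmf_delete_subsystem, with the same --transport=tcp argument.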
00:08:51.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:51.385 17:35:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:51.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.385 --rc genhtml_branch_coverage=1 00:08:51.385 --rc genhtml_function_coverage=1 00:08:51.385 --rc genhtml_legend=1 00:08:51.385 --rc geninfo_all_blocks=1 00:08:51.385 --rc geninfo_unexecuted_blocks=1 00:08:51.385 00:08:51.385 ' 00:08:51.385 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:51.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.385 --rc genhtml_branch_coverage=1 00:08:51.385 --rc genhtml_function_coverage=1 00:08:51.385 --rc genhtml_legend=1 00:08:51.385 --rc geninfo_all_blocks=1 00:08:51.385 --rc geninfo_unexecuted_blocks=1 00:08:51.385 00:08:51.385 ' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:51.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.386 --rc genhtml_branch_coverage=1 00:08:51.386 --rc genhtml_function_coverage=1 00:08:51.386 --rc genhtml_legend=1 00:08:51.386 --rc geninfo_all_blocks=1 00:08:51.386 --rc geninfo_unexecuted_blocks=1 00:08:51.386 00:08:51.386 ' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:51.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.386 --rc genhtml_branch_coverage=1 00:08:51.386 --rc genhtml_function_coverage=1 00:08:51.386 --rc genhtml_legend=1 00:08:51.386 --rc geninfo_all_blocks=1 00:08:51.386 --rc geninfo_unexecuted_blocks=1 00:08:51.386 00:08:51.386 ' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.386 17:35:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:59.529 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:59.529 17:35:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:59.529 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:59.529 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:59.529 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
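Two notes on the init path traced above. First, the one genuine shell error, "common.sh: line 33: [: : integer expression expected", comes from testing an empty variable numerically ('[' '' -eq 1 ']'); test exits with status 2, the branch is skipped, and the script carries on, so it is noise rather than a failure. A defensive form would default the value first, e.g. [ "${flag:-0}" -eq 1 ] (flag here is a placeholder, not the upstream variable name).

Second, gather_supported_nvmf_pci_devs builds vendor:device allow-lists (e810, x722, mlx), keeps the e810 matches on this rig, and resolves each PCI function to its kernel netdev through sysfs. A condensed sketch of the resolution loop, with array names following the trace at nvmf/common.sh@406-425 and the operstate "up" check elided:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dir(s) behind this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

Both ports of the E810 NIC (0000:4b:00.0 and 0000:4b:00.1, device 0x159b, driver ice) resolve to cvl_0_0 and cvl_0_1; the TCP setup that follows splits them into a target interface (moved into a network namespace, 10.0.0.2) and an initiator interface (10.0.0.1).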
00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.529 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:08:59.530 00:08:59.530 --- 10.0.0.2 ping statistics --- 00:08:59.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.530 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:08:59.530 00:08:59.530 --- 10.0.0.1 ping statistics --- 00:08:59.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.530 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=2457225 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 2457225 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2457225 ']' 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:59.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.530 17:35:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.530 [2024-11-20 17:35:58.698822] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:59.530 [2024-11-20 17:35:58.698887] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.530 [2024-11-20 17:35:58.785591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.530 [2024-11-20 17:35:58.832401] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.530 [2024-11-20 17:35:58.832459] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.530 [2024-11-20 17:35:58.832470] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.530 [2024-11-20 17:35:58.832481] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.530 [2024-11-20 17:35:58.832490] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.530 [2024-11-20 17:35:58.832655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.530 [2024-11-20 17:35:58.832659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 [2024-11-20 17:35:59.575546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 [2024-11-20 17:35:59.599908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 NULL1 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 Delay0 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.791 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2457477 00:08:59.792 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:59.792 17:35:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:00.052 [2024-11-20 17:35:59.716923] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
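Everything the test needs is configured over the RPC socket before the fault is injected. The commands below are collected verbatim from the trace above, with only the rpc.py path shortened; the comments are interpretation:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

bdev_delay_create takes its latencies in microseconds, so Delay0 adds roughly a second to every I/O; spdk_nvme_perf (pid 2457477) then keeps 128 random 512-byte commands in flight (-q 128 -o 512 -w randrw -M 70) for five seconds, guaranteeing a deep backlog when nvmf_delete_subsystem fires after the two-second sleep at delete_subsystem.sh@30.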
00:09:01.969 17:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.969 17:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.969 17:36:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 [2024-11-20 17:36:01.858830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee0b50 is same with the state(6) to be set 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write 
completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Write completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 Read completed with error (sct=0, sc=8) 00:09:01.969 starting I/O failed: -6 00:09:01.969 Read completed with error (sct=0, sc=8) 
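The flood of completion records here is the point of the test rather than a malfunction. Decoding the repeated statuses (the sct/sc reading follows the standard NVMe status-field layout; the errno mapping is the usual Linux value):

    sct=0, sc=8             -> Generic Command Status / Command Aborted due to SQ Deletion:
                               queued commands are failed back when their qpair is destroyed
    starting I/O failed: -6 -> -ENXIO: submissions attempted after the qpair entered
                               teardown are refused outright

Deleting cnode1 while Delay0 is holding each command for about a second forces both paths at once: every in-flight command aborts, and the initiator's follow-up submissions fail until perf gives up. The suite passes as long as the target survives the storm and the final teardown completes cleanly.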
00:09:01.969 Read/Write completed with error (sct=0, sc=8) [further groups of four aborted completions, each followed by "starting I/O failed: -6", elided]
00:09:01.969 starting I/O failed: -6
00:09:01.970 [2024-11-20 17:36:01.859876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd26000d450 is same with the state(6) to be set
00:09:01.970 Read/Write completed with error (sct=0, sc=8) [a long run of identical aborted completions elided]
00:09:02.913 [2024-11-20 17:36:02.815675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3a80 is same with the state(6) to be set
00:09:03.175 Read/Write completed with error (sct=0, sc=8) [a further run of identical aborted completions elided]
00:09:03.175 [2024-11-20 17:36:02.861385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd26000d780 is same with the state(6) to be set
00:09:03.175 Read/Write completed with error (sct=0, sc=8) [a further run of identical aborted completions elided]
00:09:03.175 [2024-11-20 17:36:02.861595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd26000cfe0 is same with the state(6) to be set
00:09:03.175 Read/Write completed with error (sct=0, sc=8) [a further run of identical aborted completions elided]
00:09:03.175 [2024-11-20 17:36:02.862983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee0820 is same with the state(6) to be set
00:09:03.175 Read/Write completed with error (sct=0, sc=8) [a further run of identical aborted completions elided]
00:09:03.175 [2024-11-20 17:36:02.863305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee0e80 is same with the state(6) to be set
00:09:03.175 Initializing NVMe Controllers
00:09:03.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:03.175 Controller IO queue size 128, less than required.
00:09:03.175 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:03.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:03.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:03.175 Initialization complete. Launching workers.
00:09:03.175 ========================================================
00:09:03.175                                                                              Latency(us)
00:09:03.175 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:03.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     168.30       0.08  899465.41     324.31 1012579.03
00:09:03.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     157.38       0.08 1021578.55     284.28 2003170.34
00:09:03.175 ========================================================
00:09:03.175 Total                                                                    :     325.68       0.16  958474.35     284.28 2003170.34
00:09:03.175
00:09:03.175 [2024-11-20 17:36:02.863717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee3a80 (9): Bad file descriptor
00:09:03.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:03.175 17:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.175 17:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:03.175 17:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2457477
00:09:03.176 17:36:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2457477
00:09:03.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2457477) - No such process
00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2457477
00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2457477
00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:09:03.749
17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2457477 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.749 [2024-11-20 17:36:03.395061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.749 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.750 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.750 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.750 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2458361 00:09:03.750 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:03.750 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:03.750 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2458361 00:09:03.750 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:03.750 [2024-11-20 
17:36:03.480478] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:09:04.010 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:04.010 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2458361
00:09:04.010 17:36:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[five further iterations of the same three-entry poll, through 00:09:06.552, elided]
00:09:06.813 Initializing NVMe Controllers
00:09:06.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:06.813 Controller IO queue size 128, less than required.
00:09:06.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:06.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:06.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:06.813 Initialization complete. Launching workers.
00:09:06.813 ========================================================
00:09:06.813                                                                              Latency(us)
00:09:06.813 Device Information                                                       :       IOPS      MiB/s     Average         min         max
00:09:06.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06  1002558.34  1000132.17  1042408.27
00:09:06.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06  1004227.13  1000152.50  1041897.11
00:09:06.813 ========================================================
00:09:06.813 Total                                                                    :     256.00       0.12  1003392.74  1000132.17  1042408.27
00:09:06.813
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2458361
00:09:07.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2458361) - No such process
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2458361
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:07.077 17:36:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:07.077 rmmod nvme_tcp
00:09:07.077 rmmod nvme_fabrics
00:09:07.077 rmmod nvme_keyring
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 2457225 ']'
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 2457225
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2457225 ']'
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2457225
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2457225
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '['
reactor_0 = sudo ']' 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2457225' 00:09:07.339 killing process with pid 2457225 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2457225 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2457225 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.339 17:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.892 00:09:09.892 real 0m18.397s 00:09:09.892 user 0m30.898s 00:09:09.892 sys 0m6.801s 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.892 ************************************ 00:09:09.892 END TEST nvmf_delete_subsystem 00:09:09.892 ************************************ 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.892 ************************************ 00:09:09.892 START TEST nvmf_host_management 00:09:09.892 ************************************ 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:09.892 * Looking for test storage... 
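For reference, the nvmftestfini teardown traced at the end of the test above reduces to a handful of commands. A sketch using this run's pid, namespace, and interface names; the body of _remove_spdk_ns is not shown in the trace, so the netns deletion step is an assumption:

  sync
  modprobe -v -r nvme-tcp nvme-fabrics        # rmmod's nvme_tcp, nvme_fabrics, nvme_keyring
  kill 2457225 && wait 2457225                # killprocess: stop the nvmf_tgt reactors
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk             # assumed implementation of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                    # clear the initiator-side interface

With the target gone and the namespace removed, the harness moves straight on to the next test below.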
00:09:09.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:09.892 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
[the matching @1694 LCOV_OPTS= assignment and the @1695 export/assignment of LCOV='lcov ...' repeat the same option block and are elided]
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated by each earlier sourcing of paths/export.sh]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same directory set re-prepended; full value elided]
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same directory set re-prepended; full value elided]
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[final PATH value as above; elided]
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:09:09.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.893 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.894 17:36:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:18.045 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 
-- # [[ tcp == rdma ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:18.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:18.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:18.045 Found net devices under 0000:4b:00.1: 
cvl_0_1 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.045 17:36:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.045 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.045 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.045 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.045 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.045 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.045 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:09:18.045 00:09:18.045 --- 10.0.0.2 ping statistics --- 00:09:18.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.045 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:09:18.045 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:09:18.046 00:09:18.046 --- 10.0.0.1 ping statistics --- 00:09:18.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.046 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=2463839 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 2463839 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2463839 ']' 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.046 17:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.046 [2024-11-20 17:36:17.255614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:18.046 [2024-11-20 17:36:17.255677] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.046 [2024-11-20 17:36:17.343695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.046 [2024-11-20 17:36:17.393823] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.046 [2024-11-20 17:36:17.393878] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.046 [2024-11-20 17:36:17.393887] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.046 [2024-11-20 17:36:17.393899] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.046 [2024-11-20 17:36:17.393904] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
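For orientation before the target comes up: the nvmftestinit plumbing traced above (nvmf/common.sh@250-291) gives each of the two e810 ports a role, target inside a private network namespace and initiator in the root namespace. A condensed sketch with this run's names and addresses (the real helper also tags the iptables rule with an SPDK_NVMF comment so teardown can find it):

  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port inside
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check

The target is then launched inside the namespace with -m 0x1E; 0x1E is 0b11110, so the four "Reactor started on core N" notices that follow cover cores 1 through 4 and leave core 0 free.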
00:09:18.046 [2024-11-20 17:36:17.394059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.046 [2024-11-20 17:36:17.394227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.046 [2024-11-20 17:36:17.394387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:18.046 [2024-11-20 17:36:17.394388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.308 [2024-11-20 17:36:18.133193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.308 Malloc0 00:09:18.308 [2024-11-20 17:36:18.202457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:18.308 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2463928 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2463928 /var/tmp/bdevperf.sock 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2463928 ']' 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:18.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:18.570 { 00:09:18.570 "params": { 00:09:18.570 "name": "Nvme$subsystem", 00:09:18.570 "trtype": "$TEST_TRANSPORT", 00:09:18.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.570 "adrfam": "ipv4", 00:09:18.570 "trsvcid": "$NVMF_PORT", 00:09:18.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.570 "hdgst": ${hdgst:-false}, 00:09:18.570 "ddgst": ${ddgst:-false} 00:09:18.570 }, 00:09:18.570 "method": "bdev_nvme_attach_controller" 00:09:18.570 } 00:09:18.570 EOF 00:09:18.570 )") 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:18.570 17:36:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:18.570 "params": { 00:09:18.570 "name": "Nvme0", 00:09:18.570 "trtype": "tcp", 00:09:18.570 "traddr": "10.0.0.2", 00:09:18.570 "adrfam": "ipv4", 00:09:18.570 "trsvcid": "4420", 00:09:18.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:18.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:18.570 "hdgst": false, 00:09:18.570 "ddgst": false 00:09:18.570 }, 00:09:18.570 "method": "bdev_nvme_attach_controller" 00:09:18.570 }' 00:09:18.570 [2024-11-20 17:36:18.312712] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:18.570 [2024-11-20 17:36:18.312783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463928 ] 00:09:18.570 [2024-11-20 17:36:18.395114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.570 [2024-11-20 17:36:18.442956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.144 Running I/O for 10 seconds... 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=649 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 649 -ge 100 ']' 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:19.471 17:36:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.471 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:19.471 [2024-11-20 17:36:19.214289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214504] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.471 [2024-11-20 17:36:19.214553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the 
state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.214710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf2c0 is same with the state(6) to be set 00:09:19.472 [2024-11-20 17:36:19.218661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:19.472 [2024-11-20 17:36:19.218723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.218740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:19.472 [2024-11-20 17:36:19.218751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.218764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:19.472 [2024-11-20 17:36:19.218775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.218787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:19.472 [2024-11-20 17:36:19.218798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.218811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca36b0 is same with the state(6) to be set 00:09:19.472 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.472 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:19.472 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.472 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:19.472 [2024-11-20 17:36:19.221882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.221924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.221952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.221964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.221979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.221991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:19.472 [2024-11-20 17:36:19.222816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.222973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.222986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.223001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.223013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.223029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.223041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.223056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.223069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.223083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 
[2024-11-20 17:36:19.223096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.223115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.223128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.223144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.472 [2024-11-20 17:36:19.223157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.472 [2024-11-20 17:36:19.223178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 
[2024-11-20 17:36:19.223387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 
17:36:19.223685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.473 [2024-11-20 17:36:19.223772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:19.473 [2024-11-20 17:36:19.223879] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc9f960 was disconnected and freed. reset controller. 00:09:19.473 [2024-11-20 17:36:19.225629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:19.473 task offset: 93824 on job bdev=Nvme0n1 fails 00:09:19.473 00:09:19.473 Latency(us) 00:09:19.473 [2024-11-20T16:36:19.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.473 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:19.473 Job: Nvme0n1 ended in about 0.46 seconds with error 00:09:19.473 Verification LBA range: start 0x0 length 0x400 00:09:19.473 Nvme0n1 : 0.46 1576.57 98.54 137.65 0.00 36238.55 2539.52 34515.63 00:09:19.473 [2024-11-20T16:36:19.389Z] =================================================================================================================== 00:09:19.473 [2024-11-20T16:36:19.389Z] Total : 1576.57 98.54 137.65 0.00 36238.55 2539.52 34515.63 00:09:19.473 [2024-11-20 17:36:19.227957] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:19.473 [2024-11-20 17:36:19.228002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca36b0 (9): Bad file descriptor 00:09:19.473 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.473 17:36:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:19.473 [2024-11-20 17:36:19.248905] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2463928 00:09:20.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2463928) - No such process 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:20.458 { 00:09:20.458 "params": { 00:09:20.458 "name": "Nvme$subsystem", 00:09:20.458 "trtype": "$TEST_TRANSPORT", 00:09:20.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.458 "adrfam": "ipv4", 00:09:20.458 "trsvcid": "$NVMF_PORT", 00:09:20.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.458 "hdgst": ${hdgst:-false}, 00:09:20.458 "ddgst": ${ddgst:-false} 00:09:20.458 }, 00:09:20.458 "method": "bdev_nvme_attach_controller" 00:09:20.458 } 00:09:20.458 EOF 00:09:20.458 )") 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:20.458 17:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:20.458 "params": { 00:09:20.458 "name": "Nvme0", 00:09:20.458 "trtype": "tcp", 00:09:20.458 "traddr": "10.0.0.2", 00:09:20.458 "adrfam": "ipv4", 00:09:20.458 "trsvcid": "4420", 00:09:20.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:20.458 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:20.458 "hdgst": false, 00:09:20.458 "ddgst": false 00:09:20.458 }, 00:09:20.458 "method": "bdev_nvme_attach_controller" 00:09:20.458 }' 00:09:20.458 [2024-11-20 17:36:20.289703] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:20.458 [2024-11-20 17:36:20.289759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464424 ] 00:09:20.458 [2024-11-20 17:36:20.364546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.719 [2024-11-20 17:36:20.395311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.979 Running I/O for 1 seconds... 
00:09:21.921 1674.00 IOPS, 104.62 MiB/s 00:09:21.921 Latency(us) 00:09:21.921 [2024-11-20T16:36:21.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.921 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:21.921 Verification LBA range: start 0x0 length 0x400 00:09:21.921 Nvme0n1 : 1.00 1730.59 108.16 0.00 0.00 36315.63 3386.03 34952.53 00:09:21.921 [2024-11-20T16:36:21.837Z] =================================================================================================================== 00:09:21.921 [2024-11-20T16:36:21.837Z] Total : 1730.59 108.16 0.00 0.00 36315.63 3386.03 34952.53 00:09:21.921 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:21.921 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:21.921 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:21.921 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:21.921 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:21.921 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:21.921 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.182 rmmod nvme_tcp 00:09:22.182 rmmod nvme_fabrics 00:09:22.182 rmmod nvme_keyring 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 2463839 ']' 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 2463839 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2463839 ']' 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2463839 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2463839 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.182 17:36:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2463839' 00:09:22.182 killing process with pid 2463839 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2463839 00:09:22.182 17:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2463839 00:09:22.182 [2024-11-20 17:36:22.067659] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:22.182 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:22.182 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:22.182 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:22.182 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:22.182 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:09:22.182 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:22.182 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:09:22.442 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.442 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.442 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.442 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.442 17:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:24.358 00:09:24.358 real 0m14.816s 00:09:24.358 user 0m23.502s 00:09:24.358 sys 0m6.876s 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:24.358 ************************************ 00:09:24.358 END TEST nvmf_host_management 00:09:24.358 ************************************ 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.358 ************************************ 00:09:24.358 START TEST nvmf_lvol 00:09:24.358 ************************************ 00:09:24.358 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:24.619 * Looking for test storage... 00:09:24.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:24.619 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.620 --rc genhtml_branch_coverage=1 00:09:24.620 --rc genhtml_function_coverage=1 00:09:24.620 --rc genhtml_legend=1 00:09:24.620 --rc geninfo_all_blocks=1 00:09:24.620 --rc geninfo_unexecuted_blocks=1 00:09:24.620 00:09:24.620 ' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.620 --rc genhtml_branch_coverage=1 00:09:24.620 --rc genhtml_function_coverage=1 00:09:24.620 --rc genhtml_legend=1 00:09:24.620 --rc geninfo_all_blocks=1 00:09:24.620 --rc geninfo_unexecuted_blocks=1 00:09:24.620 00:09:24.620 ' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.620 --rc genhtml_branch_coverage=1 00:09:24.620 --rc genhtml_function_coverage=1 00:09:24.620 --rc genhtml_legend=1 00:09:24.620 --rc geninfo_all_blocks=1 00:09:24.620 --rc geninfo_unexecuted_blocks=1 00:09:24.620 00:09:24.620 ' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.620 --rc genhtml_branch_coverage=1 00:09:24.620 --rc genhtml_function_coverage=1 00:09:24.620 --rc genhtml_legend=1 00:09:24.620 --rc geninfo_all_blocks=1 00:09:24.620 --rc geninfo_unexecuted_blocks=1 00:09:24.620 00:09:24.620 ' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.620 17:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:32.766 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:32.766 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:32.766 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:32.766 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.766 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.767 
17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:09:32.767 00:09:32.767 --- 10.0.0.2 ping statistics --- 00:09:32.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.767 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:09:32.767 00:09:32.767 --- 10.0.0.1 ping statistics --- 00:09:32.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.767 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:32.767 17:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=2468946 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 2468946 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2468946 ']' 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.767 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:32.767 [2024-11-20 17:36:32.074484] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
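The nvmf_tcp_init block just above builds the test topology this run uses: with two E810 ports detected (cvl_0_0 and cvl_0_1), the first port becomes the target at 10.0.0.2 inside a dedicated network namespace while the second stays in the root namespace as the initiator at 10.0.0.1, so initiator and target traffic traverse two separate ports (presumably cabled back-to-back) rather than loopback. Recapped as a plain sequence, the traced commands amount to the following; device names, addresses, and the namespace name are the ones this node happened to use, not fixed values:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

This is also why the nvmf_tgt launched next is prefixed with NVMF_TARGET_NS_CMD, i.e. run under `ip netns exec cvl_0_0_ns_spdk`.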
00:09:32.767 [2024-11-20 17:36:32.074550] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.767 [2024-11-20 17:36:32.161939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.767 [2024-11-20 17:36:32.210149] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.767 [2024-11-20 17:36:32.210210] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.767 [2024-11-20 17:36:32.210222] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.767 [2024-11-20 17:36:32.210238] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.767 [2024-11-20 17:36:32.210246] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.767 [2024-11-20 17:36:32.210342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.767 [2024-11-20 17:36:32.210501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.767 [2024-11-20 17:36:32.210501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.029 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.029 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:33.029 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:33.029 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.029 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:33.289 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.289 17:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.289 [2024-11-20 17:36:33.110867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.289 17:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.550 17:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:33.550 17:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.811 17:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:33.811 17:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:34.073 17:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:34.334 17:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=579de1b4-e317-4bd2-9bcb-e78db4d1861c 00:09:34.334 17:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 579de1b4-e317-4bd2-9bcb-e78db4d1861c lvol 20 00:09:34.334 17:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=78f85dd7-2aed-4236-bc8d-3f324b1c5487 00:09:34.334 17:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:34.594 17:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 78f85dd7-2aed-4236-bc8d-3f324b1c5487 00:09:34.855 17:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:34.855 [2024-11-20 17:36:34.743747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.116 17:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.116 17:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2469643 00:09:35.116 17:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:35.116 17:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:36.500 17:36:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 78f85dd7-2aed-4236-bc8d-3f324b1c5487 MY_SNAPSHOT 00:09:36.500 17:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b03d4375-6be0-4466-a24d-e41dc599e914 00:09:36.500 17:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 78f85dd7-2aed-4236-bc8d-3f324b1c5487 30 00:09:36.500 17:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b03d4375-6be0-4466-a24d-e41dc599e914 MY_CLONE 00:09:36.760 17:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=73922eb6-78ae-4898-ad42-b5b131defe91 00:09:36.760 17:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 73922eb6-78ae-4898-ad42-b5b131defe91 00:09:37.021 17:36:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2469643 00:09:47.017 Initializing NVMe Controllers 00:09:47.017 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:47.017 Controller IO queue size 128, less than required. 00:09:47.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:47.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:47.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:47.017 Initialization complete. Launching workers. 00:09:47.017 ======================================================== 00:09:47.017 Latency(us) 00:09:47.017 Device Information : IOPS MiB/s Average min max 00:09:47.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16638.60 64.99 7694.68 1586.48 43378.39 00:09:47.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17382.50 67.90 7364.42 1167.26 57610.56 00:09:47.017 ======================================================== 00:09:47.017 Total : 34021.10 132.89 7525.94 1167.26 57610.56 00:09:47.017 00:09:47.017 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:47.017 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 78f85dd7-2aed-4236-bc8d-3f324b1c5487 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 579de1b4-e317-4bd2-9bcb-e78db4d1861c 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.018 rmmod nvme_tcp 00:09:47.018 rmmod nvme_fabrics 00:09:47.018 rmmod nvme_keyring 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 2468946 ']' 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 2468946 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2468946 ']' 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2468946 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.018 17:36:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2468946 00:09:47.018 17:36:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2468946' 00:09:47.018 killing process with pid 2468946 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2468946 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2468946 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.018 17:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.402 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:48.402 00:09:48.402 real 0m24.017s 00:09:48.403 user 1m5.138s 00:09:48.403 sys 0m8.501s 00:09:48.403 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.403 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 ************************************ 00:09:48.403 END TEST nvmf_lvol 00:09:48.403 ************************************ 00:09:48.403 17:36:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:48.403 17:36:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:48.403 17:36:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.403 17:36:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.664 ************************************ 00:09:48.665 START TEST nvmf_lvs_grow 00:09:48.665 ************************************ 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:48.665 * Looking for test storage... 
00:09:48.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:48.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.665 --rc genhtml_branch_coverage=1 00:09:48.665 --rc genhtml_function_coverage=1 00:09:48.665 --rc genhtml_legend=1 00:09:48.665 --rc geninfo_all_blocks=1 00:09:48.665 --rc geninfo_unexecuted_blocks=1 00:09:48.665 00:09:48.665 ' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:48.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.665 --rc genhtml_branch_coverage=1 00:09:48.665 --rc genhtml_function_coverage=1 00:09:48.665 --rc genhtml_legend=1 00:09:48.665 --rc geninfo_all_blocks=1 00:09:48.665 --rc geninfo_unexecuted_blocks=1 00:09:48.665 00:09:48.665 ' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:48.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.665 --rc genhtml_branch_coverage=1 00:09:48.665 --rc genhtml_function_coverage=1 00:09:48.665 --rc genhtml_legend=1 00:09:48.665 --rc geninfo_all_blocks=1 00:09:48.665 --rc geninfo_unexecuted_blocks=1 00:09:48.665 00:09:48.665 ' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:48.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.665 --rc genhtml_branch_coverage=1 00:09:48.665 --rc genhtml_function_coverage=1 00:09:48.665 --rc genhtml_legend=1 00:09:48.665 --rc geninfo_all_blocks=1 00:09:48.665 --rc geninfo_unexecuted_blocks=1 00:09:48.665 00:09:48.665 ' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:48.665 17:36:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.665 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.666 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.927 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:48.928 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:48.928 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.928 17:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.076 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:57.077 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:57.077 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:57.077 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:57.077 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.077 
17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.077 17:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:09:57.077 00:09:57.077 --- 10.0.0.2 ping statistics --- 00:09:57.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.077 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:09:57.077 00:09:57.077 --- 10.0.0.1 ping statistics --- 00:09:57.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.077 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=2476021 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 2476021 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2476021 ']' 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.077 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:57.077 [2024-11-20 17:36:56.162228] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
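(Reference note: the nvmfappstart/waitforlisten sequence above amounts to launching nvmf_tgt inside the namespace built earlier and polling until its RPC socket answers. A minimal shell sketch, assuming $SPDK_DIR points at the SPDK checkout used in this run; the until-loop is an illustrative stand-in for the harness's more elaborate waitforlisten helper:

    # Start the NVMe-oF target in the test namespace: -i 0 sets the shm id,
    # -e 0xFFFF enables all tracepoint groups, -m 0x1 pins the reactor to core 0.
    sudo ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # The RPC socket lives on the host filesystem (/var/tmp/spdk.sock by
    # default), so it can be polled without entering the namespace.
    until sudo "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
)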
00:09:57.077 [2024-11-20 17:36:56.162297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.078 [2024-11-20 17:36:56.251364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.078 [2024-11-20 17:36:56.296978] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.078 [2024-11-20 17:36:56.297033] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.078 [2024-11-20 17:36:56.297045] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.078 [2024-11-20 17:36:56.297055] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.078 [2024-11-20 17:36:56.297063] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.078 [2024-11-20 17:36:56.297098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.078 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.078 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:57.078 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:57.078 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.078 17:36:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:57.339 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.339 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:57.339 [2024-11-20 17:36:57.197185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.339 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:57.339 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:57.339 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.339 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:57.601 ************************************ 00:09:57.601 START TEST lvs_grow_clean 00:09:57.601 ************************************ 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:57.601 17:36:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:57.601 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:57.862 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5951cbd4-04e6-4e8a-9ee6-922319561493 00:09:57.862 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:09:57.862 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:58.123 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:58.123 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:58.124 17:36:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5951cbd4-04e6-4e8a-9ee6-922319561493 lvol 150 00:09:58.385 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17 00:09:58.385 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:58.385 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:58.385 [2024-11-20 17:36:58.213099] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:58.385 [2024-11-20 17:36:58.213190] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:58.385 true 00:09:58.385 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5951cbd4-04e6-4e8a-9ee6-922319561493 00:09:58.385 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:58.647 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:58.647 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:58.908 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17 00:09:58.908 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:59.169 [2024-11-20 17:36:58.927382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.169 17:36:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2476723 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2476723 /var/tmp/bdevperf.sock 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2476723 ']' 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:59.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.432 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:59.432 [2024-11-20 17:36:59.159701] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
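(Reference note: the lvs_grow_clean setup above is all plain rpc.py calls. A condensed sketch, where rpc.py stands for the full scripts/rpc.py path shown in the log, $aio_file for the test/nvmf/target/aio_bdev backing file, and $lvs_uuid/$lvol_uuid for the UUIDs the create calls print:

    # 200 MiB file / 4 MiB clusters = 50 clusters; the 49 data clusters
    # checked above are consistent with one cluster kept for lvstore metadata.
    truncate -s 200M "$aio_file"
    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096      # 4096-byte blocks
    lvs_uuid=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol_uuid=$(rpc.py bdev_lvol_create -u "$lvs_uuid" lvol 150)   # 150 MiB volume

    # Export the lvol over NVMe/TCP so bdevperf can attach to it below.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The 150 MiB request rounds up to 38 whole 4 MiB clusters, i.e. 152 MiB, which is why the Nvme0n1 dump below reports "num_blocks": 38912 at a 4096-byte block size.)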
00:09:59.432 [2024-11-20 17:36:59.159768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476723 ] 00:09:59.432 [2024-11-20 17:36:59.240310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.432 [2024-11-20 17:36:59.286865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.376 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.376 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:00.376 17:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:00.376 Nvme0n1 00:10:00.376 17:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:00.637 [ 00:10:00.637 { 00:10:00.637 "name": "Nvme0n1", 00:10:00.637 "aliases": [ 00:10:00.637 "a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17" 00:10:00.637 ], 00:10:00.637 "product_name": "NVMe disk", 00:10:00.637 "block_size": 4096, 00:10:00.637 "num_blocks": 38912, 00:10:00.637 "uuid": "a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17", 00:10:00.637 "numa_id": 0, 00:10:00.637 "assigned_rate_limits": { 00:10:00.637 "rw_ios_per_sec": 0, 00:10:00.637 "rw_mbytes_per_sec": 0, 00:10:00.637 "r_mbytes_per_sec": 0, 00:10:00.637 "w_mbytes_per_sec": 0 00:10:00.637 }, 00:10:00.637 "claimed": false, 00:10:00.637 "zoned": false, 00:10:00.637 "supported_io_types": { 00:10:00.637 "read": true, 00:10:00.637 "write": true, 00:10:00.637 "unmap": true, 00:10:00.637 "flush": true, 00:10:00.637 "reset": true, 00:10:00.637 "nvme_admin": true, 00:10:00.637 "nvme_io": true, 00:10:00.637 "nvme_io_md": false, 00:10:00.637 "write_zeroes": true, 00:10:00.637 "zcopy": false, 00:10:00.637 "get_zone_info": false, 00:10:00.637 "zone_management": false, 00:10:00.637 "zone_append": false, 00:10:00.637 "compare": true, 00:10:00.637 "compare_and_write": true, 00:10:00.637 "abort": true, 00:10:00.637 "seek_hole": false, 00:10:00.637 "seek_data": false, 00:10:00.637 "copy": true, 00:10:00.637 "nvme_iov_md": false 00:10:00.637 }, 00:10:00.637 "memory_domains": [ 00:10:00.637 { 00:10:00.637 "dma_device_id": "system", 00:10:00.637 "dma_device_type": 1 00:10:00.637 } 00:10:00.637 ], 00:10:00.637 "driver_specific": { 00:10:00.637 "nvme": [ 00:10:00.637 { 00:10:00.637 "trid": { 00:10:00.637 "trtype": "TCP", 00:10:00.637 "adrfam": "IPv4", 00:10:00.637 "traddr": "10.0.0.2", 00:10:00.637 "trsvcid": "4420", 00:10:00.637 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:00.637 }, 00:10:00.637 "ctrlr_data": { 00:10:00.637 "cntlid": 1, 00:10:00.637 "vendor_id": "0x8086", 00:10:00.637 "model_number": "SPDK bdev Controller", 00:10:00.637 "serial_number": "SPDK0", 00:10:00.637 "firmware_revision": "24.09.1", 00:10:00.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:00.637 "oacs": { 00:10:00.637 "security": 0, 00:10:00.637 "format": 0, 00:10:00.637 "firmware": 0, 00:10:00.637 "ns_manage": 0 00:10:00.638 }, 00:10:00.638 "multi_ctrlr": true, 00:10:00.638 
"ana_reporting": false 00:10:00.638 }, 00:10:00.638 "vs": { 00:10:00.638 "nvme_version": "1.3" 00:10:00.638 }, 00:10:00.638 "ns_data": { 00:10:00.638 "id": 1, 00:10:00.638 "can_share": true 00:10:00.638 } 00:10:00.638 } 00:10:00.638 ], 00:10:00.638 "mp_policy": "active_passive" 00:10:00.638 } 00:10:00.638 } 00:10:00.638 ] 00:10:00.638 17:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2477059 00:10:00.638 17:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:00.638 17:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:00.638 Running I/O for 10 seconds... 00:10:02.026 Latency(us) 00:10:02.026 [2024-11-20T16:37:01.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.026 Nvme0n1 : 1.00 24913.00 97.32 0.00 0.00 0.00 0.00 0.00 00:10:02.026 [2024-11-20T16:37:01.942Z] =================================================================================================================== 00:10:02.026 [2024-11-20T16:37:01.942Z] Total : 24913.00 97.32 0.00 0.00 0.00 0.00 0.00 00:10:02.026 00:10:02.597 17:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:02.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.858 Nvme0n1 : 2.00 25113.00 98.10 0.00 0.00 0.00 0.00 0.00 00:10:02.858 [2024-11-20T16:37:02.774Z] =================================================================================================================== 00:10:02.858 [2024-11-20T16:37:02.774Z] Total : 25113.00 98.10 0.00 0.00 0.00 0.00 0.00 00:10:02.858 00:10:02.858 true 00:10:02.858 17:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:02.858 17:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:03.119 17:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:03.119 17:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:03.119 17:37:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2477059 00:10:03.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.691 Nvme0n1 : 3.00 25202.00 98.45 0.00 0.00 0.00 0.00 0.00 00:10:03.691 [2024-11-20T16:37:03.607Z] =================================================================================================================== 00:10:03.691 [2024-11-20T16:37:03.607Z] Total : 25202.00 98.45 0.00 0.00 0.00 0.00 0.00 00:10:03.691 00:10:05.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.078 Nvme0n1 : 4.00 25275.25 98.73 0.00 0.00 0.00 0.00 0.00 00:10:05.078 [2024-11-20T16:37:04.994Z] 
=================================================================================================================== 00:10:05.078 [2024-11-20T16:37:04.994Z] Total : 25275.25 98.73 0.00 0.00 0.00 0.00 0.00 00:10:05.078 00:10:06.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.020 Nvme0n1 : 5.00 25327.40 98.94 0.00 0.00 0.00 0.00 0.00 00:10:06.020 [2024-11-20T16:37:05.936Z] =================================================================================================================== 00:10:06.020 [2024-11-20T16:37:05.936Z] Total : 25327.40 98.94 0.00 0.00 0.00 0.00 0.00 00:10:06.020 00:10:06.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.964 Nvme0n1 : 6.00 25362.00 99.07 0.00 0.00 0.00 0.00 0.00 00:10:06.964 [2024-11-20T16:37:06.880Z] =================================================================================================================== 00:10:06.964 [2024-11-20T16:37:06.880Z] Total : 25362.00 99.07 0.00 0.00 0.00 0.00 0.00 00:10:06.964 00:10:07.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.906 Nvme0n1 : 7.00 25386.71 99.17 0.00 0.00 0.00 0.00 0.00 00:10:07.906 [2024-11-20T16:37:07.822Z] =================================================================================================================== 00:10:07.906 [2024-11-20T16:37:07.822Z] Total : 25386.71 99.17 0.00 0.00 0.00 0.00 0.00 00:10:07.906 00:10:08.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.849 Nvme0n1 : 8.00 25407.38 99.25 0.00 0.00 0.00 0.00 0.00 00:10:08.849 [2024-11-20T16:37:08.765Z] =================================================================================================================== 00:10:08.849 [2024-11-20T16:37:08.765Z] Total : 25407.38 99.25 0.00 0.00 0.00 0.00 0.00 00:10:08.849 00:10:09.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.791 Nvme0n1 : 9.00 25427.00 99.32 0.00 0.00 0.00 0.00 0.00 00:10:09.791 [2024-11-20T16:37:09.707Z] =================================================================================================================== 00:10:09.791 [2024-11-20T16:37:09.707Z] Total : 25427.00 99.32 0.00 0.00 0.00 0.00 0.00 00:10:09.791 00:10:10.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.734 Nvme0n1 : 10.00 25437.60 99.37 0.00 0.00 0.00 0.00 0.00 00:10:10.734 [2024-11-20T16:37:10.650Z] =================================================================================================================== 00:10:10.734 [2024-11-20T16:37:10.650Z] Total : 25437.60 99.37 0.00 0.00 0.00 0.00 0.00 00:10:10.734 00:10:10.734 00:10:10.734 Latency(us) 00:10:10.734 [2024-11-20T16:37:10.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.734 Nvme0n1 : 10.00 25434.55 99.35 0.00 0.00 5028.85 2484.91 16602.45 00:10:10.734 [2024-11-20T16:37:10.650Z] =================================================================================================================== 00:10:10.734 [2024-11-20T16:37:10.650Z] Total : 25434.55 99.35 0.00 0.00 5028.85 2484.91 16602.45 00:10:10.734 { 00:10:10.734 "results": [ 00:10:10.734 { 00:10:10.734 "job": "Nvme0n1", 00:10:10.734 "core_mask": "0x2", 00:10:10.734 "workload": "randwrite", 00:10:10.734 "status": "finished", 00:10:10.734 "queue_depth": 128, 00:10:10.734 "io_size": 4096, 00:10:10.734 
"runtime": 10.003756, 00:10:10.734 "iops": 25434.546784227845, 00:10:10.734 "mibps": 99.35369837589002, 00:10:10.734 "io_failed": 0, 00:10:10.734 "io_timeout": 0, 00:10:10.734 "avg_latency_us": 5028.848125996466, 00:10:10.734 "min_latency_us": 2484.9066666666668, 00:10:10.734 "max_latency_us": 16602.453333333335 00:10:10.734 } 00:10:10.734 ], 00:10:10.734 "core_count": 1 00:10:10.734 } 00:10:10.734 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2476723 00:10:10.734 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2476723 ']' 00:10:10.734 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2476723 00:10:10.734 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:10.734 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.734 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2476723 00:10:10.996 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:10.996 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:10.996 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2476723' 00:10:10.996 killing process with pid 2476723 00:10:10.996 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2476723 00:10:10.996 Received shutdown signal, test time was about 10.000000 seconds 00:10:10.996 00:10:10.996 Latency(us) 00:10:10.996 [2024-11-20T16:37:10.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.996 [2024-11-20T16:37:10.912Z] =================================================================================================================== 00:10:10.996 [2024-11-20T16:37:10.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:10.996 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2476723 00:10:10.996 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:11.258 17:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:11.258 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:11.258 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:11.520 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:11.520 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:11.520 17:37:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:11.781 [2024-11-20 17:37:11.506444] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:11.781 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:12.044 request: 00:10:12.044 { 00:10:12.044 "uuid": "5951cbd4-04e6-4e8a-9ee6-922319561493", 00:10:12.044 "method": "bdev_lvol_get_lvstores", 00:10:12.044 "req_id": 1 00:10:12.044 } 00:10:12.044 Got JSON-RPC error response 00:10:12.044 response: 00:10:12.044 { 00:10:12.044 "code": -19, 00:10:12.044 "message": "No such device" 00:10:12.044 } 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:12.044 aio_bdev 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.044 17:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:12.305 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17 -t 2000 00:10:12.305 [ 00:10:12.305 { 00:10:12.305 "name": "a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17", 00:10:12.305 "aliases": [ 00:10:12.305 "lvs/lvol" 00:10:12.305 ], 00:10:12.305 "product_name": "Logical Volume", 00:10:12.305 "block_size": 4096, 00:10:12.305 "num_blocks": 38912, 00:10:12.305 "uuid": "a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17", 00:10:12.305 "assigned_rate_limits": { 00:10:12.305 "rw_ios_per_sec": 0, 00:10:12.305 "rw_mbytes_per_sec": 0, 00:10:12.305 "r_mbytes_per_sec": 0, 00:10:12.305 "w_mbytes_per_sec": 0 00:10:12.305 }, 00:10:12.305 "claimed": false, 00:10:12.305 "zoned": false, 00:10:12.305 "supported_io_types": { 00:10:12.305 "read": true, 00:10:12.305 "write": true, 00:10:12.305 "unmap": true, 00:10:12.305 "flush": false, 00:10:12.305 "reset": true, 00:10:12.305 "nvme_admin": false, 00:10:12.305 "nvme_io": false, 00:10:12.305 "nvme_io_md": false, 00:10:12.305 "write_zeroes": true, 00:10:12.305 "zcopy": false, 00:10:12.305 "get_zone_info": false, 00:10:12.305 "zone_management": false, 00:10:12.305 "zone_append": false, 00:10:12.305 "compare": false, 00:10:12.305 "compare_and_write": false, 00:10:12.305 "abort": false, 00:10:12.305 "seek_hole": true, 00:10:12.305 "seek_data": true, 00:10:12.305 "copy": false, 00:10:12.305 "nvme_iov_md": false 00:10:12.305 }, 00:10:12.305 "driver_specific": { 00:10:12.305 "lvol": { 00:10:12.305 "lvol_store_uuid": "5951cbd4-04e6-4e8a-9ee6-922319561493", 00:10:12.305 "base_bdev": "aio_bdev", 00:10:12.305 "thin_provision": false, 00:10:12.305 "num_allocated_clusters": 38, 00:10:12.305 "snapshot": false, 00:10:12.305 "clone": false, 00:10:12.305 "esnap_clone": false 00:10:12.305 } 00:10:12.305 } 00:10:12.305 } 00:10:12.305 ] 00:10:12.566 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:12.566 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:12.566 
17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:12.566 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:12.566 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:12.566 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:12.827 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:12.827 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9f45b3a-63f6-4e8b-8aef-e6d4e7be4b17 00:10:12.827 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5951cbd4-04e6-4e8a-9ee6-922319561493 00:10:13.088 17:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:13.349 00:10:13.349 real 0m15.821s 00:10:13.349 user 0m15.549s 00:10:13.349 sys 0m1.443s 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:13.349 ************************************ 00:10:13.349 END TEST lvs_grow_clean 00:10:13.349 ************************************ 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:13.349 ************************************ 00:10:13.349 START TEST lvs_grow_dirty 00:10:13.349 ************************************ 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:13.349 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:13.609 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:13.609 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:13.870 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:13.870 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:13.870 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:13.870 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:13.870 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:13.870 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bcd192bb-e823-44c7-8ccc-2e25f100c47f lvol 150 00:10:14.130 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e6e61eaa-f919-4f1c-99c4-ae9f19bec45b 00:10:14.130 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:14.130 17:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:14.130 [2024-11-20 17:37:14.016285] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:14.130 [2024-11-20 17:37:14.016329] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:14.130 true 00:10:14.130 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:14.130 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:14.391 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:14.391 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:14.651 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e6e61eaa-f919-4f1c-99c4-ae9f19bec45b 00:10:14.651 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:14.911 [2024-11-20 17:37:14.674212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.911 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2479821 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2479821 /var/tmp/bdevperf.sock 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2479821 ']' 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:15.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.172 17:37:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:15.172 [2024-11-20 17:37:14.890606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
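(Reference note: the resize already visible above, truncate -s 400M followed by bdev_aio_rescan, doubles the backing file while the lvstore stays live; at a 4096-byte block size, 200 MiB is 51200 blocks and 400 MiB is 102400, matching the rescan notice. The lvstore only claims the new space when bdev_lvol_grow_lvstore runs mid-workload further down, taking total_data_clusters from 49 to 99 (100 clusters less the metadata cluster). A sketch of that sequence, with rpc.py, $aio_file and $lvs_uuid as shorthand as before:

    # Grow the file under the live AIO bdev, then re-read its size:
    # 200 MiB / 4 KiB = 51200 blocks before, 400 MiB / 4 KiB = 102400 after.
    truncate -s 400M "$aio_file"
    rpc.py bdev_aio_rescan aio_bdev

    # Extend the lvstore into the new space while I/O is in flight; the jq
    # check further down expects total_data_clusters to read 99 afterwards.
    rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"
    rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'
)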
00:10:15.172 [2024-11-20 17:37:14.890658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479821 ] 00:10:15.172 [2024-11-20 17:37:14.966051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.172 [2024-11-20 17:37:14.994515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.121 17:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.121 17:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:16.121 17:37:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:16.381 Nvme0n1 00:10:16.381 17:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:16.381 [ 00:10:16.381 { 00:10:16.381 "name": "Nvme0n1", 00:10:16.381 "aliases": [ 00:10:16.381 "e6e61eaa-f919-4f1c-99c4-ae9f19bec45b" 00:10:16.381 ], 00:10:16.382 "product_name": "NVMe disk", 00:10:16.382 "block_size": 4096, 00:10:16.382 "num_blocks": 38912, 00:10:16.382 "uuid": "e6e61eaa-f919-4f1c-99c4-ae9f19bec45b", 00:10:16.382 "numa_id": 0, 00:10:16.382 "assigned_rate_limits": { 00:10:16.382 "rw_ios_per_sec": 0, 00:10:16.382 "rw_mbytes_per_sec": 0, 00:10:16.382 "r_mbytes_per_sec": 0, 00:10:16.382 "w_mbytes_per_sec": 0 00:10:16.382 }, 00:10:16.382 "claimed": false, 00:10:16.382 "zoned": false, 00:10:16.382 "supported_io_types": { 00:10:16.382 "read": true, 00:10:16.382 "write": true, 00:10:16.382 "unmap": true, 00:10:16.382 "flush": true, 00:10:16.382 "reset": true, 00:10:16.382 "nvme_admin": true, 00:10:16.382 "nvme_io": true, 00:10:16.382 "nvme_io_md": false, 00:10:16.382 "write_zeroes": true, 00:10:16.382 "zcopy": false, 00:10:16.382 "get_zone_info": false, 00:10:16.382 "zone_management": false, 00:10:16.382 "zone_append": false, 00:10:16.382 "compare": true, 00:10:16.382 "compare_and_write": true, 00:10:16.382 "abort": true, 00:10:16.382 "seek_hole": false, 00:10:16.382 "seek_data": false, 00:10:16.382 "copy": true, 00:10:16.382 "nvme_iov_md": false 00:10:16.382 }, 00:10:16.382 "memory_domains": [ 00:10:16.382 { 00:10:16.382 "dma_device_id": "system", 00:10:16.382 "dma_device_type": 1 00:10:16.382 } 00:10:16.382 ], 00:10:16.382 "driver_specific": { 00:10:16.382 "nvme": [ 00:10:16.382 { 00:10:16.382 "trid": { 00:10:16.382 "trtype": "TCP", 00:10:16.382 "adrfam": "IPv4", 00:10:16.382 "traddr": "10.0.0.2", 00:10:16.382 "trsvcid": "4420", 00:10:16.382 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:16.382 }, 00:10:16.382 "ctrlr_data": { 00:10:16.382 "cntlid": 1, 00:10:16.382 "vendor_id": "0x8086", 00:10:16.382 "model_number": "SPDK bdev Controller", 00:10:16.382 "serial_number": "SPDK0", 00:10:16.382 "firmware_revision": "24.09.1", 00:10:16.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:16.382 "oacs": { 00:10:16.382 "security": 0, 00:10:16.382 "format": 0, 00:10:16.382 "firmware": 0, 00:10:16.382 "ns_manage": 0 00:10:16.382 }, 00:10:16.382 "multi_ctrlr": true, 00:10:16.382 
"ana_reporting": false 00:10:16.382 }, 00:10:16.382 "vs": { 00:10:16.382 "nvme_version": "1.3" 00:10:16.382 }, 00:10:16.382 "ns_data": { 00:10:16.382 "id": 1, 00:10:16.382 "can_share": true 00:10:16.382 } 00:10:16.382 } 00:10:16.382 ], 00:10:16.382 "mp_policy": "active_passive" 00:10:16.382 } 00:10:16.382 } 00:10:16.382 ] 00:10:16.642 17:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2480163 00:10:16.642 17:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:16.642 17:37:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:16.642 Running I/O for 10 seconds... 00:10:17.580 Latency(us) 00:10:17.580 [2024-11-20T16:37:17.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.580 Nvme0n1 : 1.00 25047.00 97.84 0.00 0.00 0.00 0.00 0.00 00:10:17.580 [2024-11-20T16:37:17.496Z] =================================================================================================================== 00:10:17.580 [2024-11-20T16:37:17.496Z] Total : 25047.00 97.84 0.00 0.00 0.00 0.00 0.00 00:10:17.580 00:10:18.521 17:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:18.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.521 Nvme0n1 : 2.00 25195.00 98.42 0.00 0.00 0.00 0.00 0.00 00:10:18.521 [2024-11-20T16:37:18.437Z] =================================================================================================================== 00:10:18.521 [2024-11-20T16:37:18.437Z] Total : 25195.00 98.42 0.00 0.00 0.00 0.00 0.00 00:10:18.521 00:10:18.781 true 00:10:18.781 17:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:18.781 17:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:18.781 17:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:18.781 17:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:18.781 17:37:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2480163 00:10:19.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.723 Nvme0n1 : 3.00 25260.00 98.67 0.00 0.00 0.00 0.00 0.00 00:10:19.723 [2024-11-20T16:37:19.639Z] =================================================================================================================== 00:10:19.723 [2024-11-20T16:37:19.639Z] Total : 25260.00 98.67 0.00 0.00 0.00 0.00 0.00 00:10:19.723 00:10:20.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.665 Nvme0n1 : 4.00 25313.25 98.88 0.00 0.00 0.00 0.00 0.00 00:10:20.665 [2024-11-20T16:37:20.581Z] 
=================================================================================================================== 00:10:20.665 [2024-11-20T16:37:20.581Z] Total : 25313.25 98.88 0.00 0.00 0.00 0.00 0.00 00:10:20.665 00:10:21.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.612 Nvme0n1 : 5.00 25357.40 99.05 0.00 0.00 0.00 0.00 0.00 00:10:21.612 [2024-11-20T16:37:21.528Z] =================================================================================================================== 00:10:21.612 [2024-11-20T16:37:21.528Z] Total : 25357.40 99.05 0.00 0.00 0.00 0.00 0.00 00:10:21.612 00:10:22.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.668 Nvme0n1 : 6.00 25387.17 99.17 0.00 0.00 0.00 0.00 0.00 00:10:22.668 [2024-11-20T16:37:22.584Z] =================================================================================================================== 00:10:22.668 [2024-11-20T16:37:22.584Z] Total : 25387.17 99.17 0.00 0.00 0.00 0.00 0.00 00:10:22.668 00:10:23.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.611 Nvme0n1 : 7.00 25417.57 99.29 0.00 0.00 0.00 0.00 0.00 00:10:23.611 [2024-11-20T16:37:23.527Z] =================================================================================================================== 00:10:23.611 [2024-11-20T16:37:23.527Z] Total : 25417.57 99.29 0.00 0.00 0.00 0.00 0.00 00:10:23.611 00:10:24.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.553 Nvme0n1 : 8.00 25434.38 99.35 0.00 0.00 0.00 0.00 0.00 00:10:24.553 [2024-11-20T16:37:24.469Z] =================================================================================================================== 00:10:24.553 [2024-11-20T16:37:24.469Z] Total : 25434.38 99.35 0.00 0.00 0.00 0.00 0.00 00:10:24.553 00:10:25.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.939 Nvme0n1 : 9.00 25450.67 99.42 0.00 0.00 0.00 0.00 0.00 00:10:25.939 [2024-11-20T16:37:25.855Z] =================================================================================================================== 00:10:25.939 [2024-11-20T16:37:25.855Z] Total : 25450.67 99.42 0.00 0.00 0.00 0.00 0.00 00:10:25.939 00:10:26.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.511 Nvme0n1 : 10.00 25465.70 99.48 0.00 0.00 0.00 0.00 0.00 00:10:26.511 [2024-11-20T16:37:26.427Z] =================================================================================================================== 00:10:26.511 [2024-11-20T16:37:26.427Z] Total : 25465.70 99.48 0.00 0.00 0.00 0.00 0.00 00:10:26.511 00:10:26.774 00:10:26.774 Latency(us) 00:10:26.774 [2024-11-20T16:37:26.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.774 Nvme0n1 : 10.00 25467.92 99.48 0.00 0.00 5022.78 3099.31 9229.65 00:10:26.774 [2024-11-20T16:37:26.690Z] =================================================================================================================== 00:10:26.774 [2024-11-20T16:37:26.690Z] Total : 25467.92 99.48 0.00 0.00 5022.78 3099.31 9229.65 00:10:26.774 { 00:10:26.774 "results": [ 00:10:26.774 { 00:10:26.774 "job": "Nvme0n1", 00:10:26.774 "core_mask": "0x2", 00:10:26.774 "workload": "randwrite", 00:10:26.774 "status": "finished", 00:10:26.774 "queue_depth": 128, 00:10:26.774 "io_size": 4096, 00:10:26.774 
"runtime": 10.004156, 00:10:26.774 "iops": 25467.915534303942, 00:10:26.774 "mibps": 99.48404505587477, 00:10:26.774 "io_failed": 0, 00:10:26.774 "io_timeout": 0, 00:10:26.774 "avg_latency_us": 5022.775898633488, 00:10:26.774 "min_latency_us": 3099.306666666667, 00:10:26.774 "max_latency_us": 9229.653333333334 00:10:26.774 } 00:10:26.774 ], 00:10:26.774 "core_count": 1 00:10:26.774 } 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2479821 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2479821 ']' 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2479821 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2479821 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2479821' 00:10:26.774 killing process with pid 2479821 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2479821 00:10:26.774 Received shutdown signal, test time was about 10.000000 seconds 00:10:26.774 00:10:26.774 Latency(us) 00:10:26.774 [2024-11-20T16:37:26.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.774 [2024-11-20T16:37:26.690Z] =================================================================================================================== 00:10:26.774 [2024-11-20T16:37:26.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2479821 00:10:26.774 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:27.035 17:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:27.296 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:27.296 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:27.296 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:27.296 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:27.296 17:37:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2476021 00:10:27.296 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2476021 00:10:27.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2476021 Killed "${NVMF_APP[@]}" "$@" 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=2482436 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 2482436 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2482436 ']' 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.558 17:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:27.558 [2024-11-20 17:37:27.309953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:27.558 [2024-11-20 17:37:27.310009] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.558 [2024-11-20 17:37:27.394729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.558 [2024-11-20 17:37:27.431237] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.558 [2024-11-20 17:37:27.431287] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.558 [2024-11-20 17:37:27.431296] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.558 [2024-11-20 17:37:27.431303] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:27.558 [2024-11-20 17:37:27.431310] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.558 [2024-11-20 17:37:27.431337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:28.501 [2024-11-20 17:37:28.310578] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:28.501 [2024-11-20 17:37:28.310666] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:28.501 [2024-11-20 17:37:28.310694] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e6e61eaa-f919-4f1c-99c4-ae9f19bec45b 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e6e61eaa-f919-4f1c-99c4-ae9f19bec45b 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.501 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:28.762 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e6e61eaa-f919-4f1c-99c4-ae9f19bec45b -t 2000 00:10:28.762 [ 00:10:28.762 { 00:10:28.762 "name": "e6e61eaa-f919-4f1c-99c4-ae9f19bec45b", 00:10:28.762 "aliases": [ 00:10:28.762 "lvs/lvol" 00:10:28.762 ], 00:10:28.762 "product_name": "Logical Volume", 00:10:28.762 "block_size": 4096, 00:10:28.762 "num_blocks": 38912, 00:10:28.762 "uuid": "e6e61eaa-f919-4f1c-99c4-ae9f19bec45b", 00:10:28.762 "assigned_rate_limits": { 00:10:28.762 "rw_ios_per_sec": 0, 00:10:28.762 "rw_mbytes_per_sec": 0, 
00:10:28.762 "r_mbytes_per_sec": 0, 00:10:28.762 "w_mbytes_per_sec": 0 00:10:28.762 }, 00:10:28.762 "claimed": false, 00:10:28.762 "zoned": false, 00:10:28.762 "supported_io_types": { 00:10:28.762 "read": true, 00:10:28.762 "write": true, 00:10:28.762 "unmap": true, 00:10:28.762 "flush": false, 00:10:28.762 "reset": true, 00:10:28.762 "nvme_admin": false, 00:10:28.762 "nvme_io": false, 00:10:28.762 "nvme_io_md": false, 00:10:28.762 "write_zeroes": true, 00:10:28.762 "zcopy": false, 00:10:28.762 "get_zone_info": false, 00:10:28.762 "zone_management": false, 00:10:28.762 "zone_append": false, 00:10:28.762 "compare": false, 00:10:28.762 "compare_and_write": false, 00:10:28.762 "abort": false, 00:10:28.762 "seek_hole": true, 00:10:28.762 "seek_data": true, 00:10:28.762 "copy": false, 00:10:28.762 "nvme_iov_md": false 00:10:28.762 }, 00:10:28.762 "driver_specific": { 00:10:28.762 "lvol": { 00:10:28.762 "lvol_store_uuid": "bcd192bb-e823-44c7-8ccc-2e25f100c47f", 00:10:28.762 "base_bdev": "aio_bdev", 00:10:28.762 "thin_provision": false, 00:10:28.762 "num_allocated_clusters": 38, 00:10:28.762 "snapshot": false, 00:10:28.762 "clone": false, 00:10:28.762 "esnap_clone": false 00:10:28.762 } 00:10:28.762 } 00:10:28.762 } 00:10:28.762 ] 00:10:29.025 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:29.025 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:29.025 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:29.025 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:29.025 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:29.025 17:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:29.286 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:29.286 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:29.286 [2024-11-20 17:37:29.183304] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:29.546 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:29.546 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:29.546 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:29.546 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:29.547 request: 00:10:29.547 { 00:10:29.547 "uuid": "bcd192bb-e823-44c7-8ccc-2e25f100c47f", 00:10:29.547 "method": "bdev_lvol_get_lvstores", 00:10:29.547 "req_id": 1 00:10:29.547 } 00:10:29.547 Got JSON-RPC error response 00:10:29.547 response: 00:10:29.547 { 00:10:29.547 "code": -19, 00:10:29.547 "message": "No such device" 00:10:29.547 } 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:29.547 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:29.807 aio_bdev 00:10:29.807 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e6e61eaa-f919-4f1c-99c4-ae9f19bec45b 00:10:29.807 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e6e61eaa-f919-4f1c-99c4-ae9f19bec45b 00:10:29.807 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.807 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:29.807 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.807 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.807 17:37:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:30.068 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e6e61eaa-f919-4f1c-99c4-ae9f19bec45b -t 2000 00:10:30.068 [ 00:10:30.068 { 00:10:30.068 "name": "e6e61eaa-f919-4f1c-99c4-ae9f19bec45b", 00:10:30.068 "aliases": [ 00:10:30.068 "lvs/lvol" 00:10:30.068 ], 00:10:30.068 "product_name": "Logical Volume", 00:10:30.068 "block_size": 4096, 00:10:30.068 "num_blocks": 38912, 00:10:30.068 "uuid": "e6e61eaa-f919-4f1c-99c4-ae9f19bec45b", 00:10:30.068 "assigned_rate_limits": { 00:10:30.068 "rw_ios_per_sec": 0, 00:10:30.068 "rw_mbytes_per_sec": 0, 00:10:30.068 "r_mbytes_per_sec": 0, 00:10:30.068 "w_mbytes_per_sec": 0 00:10:30.068 }, 00:10:30.068 "claimed": false, 00:10:30.068 "zoned": false, 00:10:30.068 "supported_io_types": { 00:10:30.068 "read": true, 00:10:30.068 "write": true, 00:10:30.068 "unmap": true, 00:10:30.068 "flush": false, 00:10:30.068 "reset": true, 00:10:30.068 "nvme_admin": false, 00:10:30.068 "nvme_io": false, 00:10:30.068 "nvme_io_md": false, 00:10:30.068 "write_zeroes": true, 00:10:30.068 "zcopy": false, 00:10:30.068 "get_zone_info": false, 00:10:30.068 "zone_management": false, 00:10:30.068 "zone_append": false, 00:10:30.068 "compare": false, 00:10:30.068 "compare_and_write": false, 00:10:30.068 "abort": false, 00:10:30.068 "seek_hole": true, 00:10:30.068 "seek_data": true, 00:10:30.068 "copy": false, 00:10:30.068 "nvme_iov_md": false 00:10:30.068 }, 00:10:30.068 "driver_specific": { 00:10:30.068 "lvol": { 00:10:30.068 "lvol_store_uuid": "bcd192bb-e823-44c7-8ccc-2e25f100c47f", 00:10:30.068 "base_bdev": "aio_bdev", 00:10:30.068 "thin_provision": false, 00:10:30.068 "num_allocated_clusters": 38, 00:10:30.068 "snapshot": false, 00:10:30.068 "clone": false, 00:10:30.068 "esnap_clone": false 00:10:30.068 } 00:10:30.068 } 00:10:30.068 } 00:10:30.068 ] 00:10:30.068 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:30.068 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:30.068 17:37:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:30.330 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:30.330 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:30.330 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:30.591 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:30.592 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6e61eaa-f919-4f1c-99c4-ae9f19bec45b 00:10:30.592 17:37:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bcd192bb-e823-44c7-8ccc-2e25f100c47f 00:10:30.853 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.113 00:10:31.113 real 0m17.655s 00:10:31.113 user 0m45.893s 00:10:31.113 sys 0m3.141s 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:31.113 ************************************ 00:10:31.113 END TEST lvs_grow_dirty 00:10:31.113 ************************************ 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:31.113 nvmf_trace.0 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.113 rmmod nvme_tcp 00:10:31.113 rmmod nvme_fabrics 00:10:31.113 rmmod nvme_keyring 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:31.113 
17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 2482436 ']' 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 2482436 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2482436 ']' 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2482436 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:31.113 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.114 17:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2482436 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2482436' 00:10:31.375 killing process with pid 2482436 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2482436 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2482436 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.375 17:37:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.923 00:10:33.923 real 0m44.902s 00:10:33.923 user 1m7.833s 00:10:33.923 sys 0m10.769s 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:33.923 ************************************ 00:10:33.923 END TEST nvmf_lvs_grow 00:10:33.923 ************************************ 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.923 ************************************ 00:10:33.923 START TEST nvmf_bdev_io_wait 00:10:33.923 ************************************ 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:33.923 * Looking for test storage... 00:10:33.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:33.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.923 --rc genhtml_branch_coverage=1 00:10:33.923 --rc genhtml_function_coverage=1 00:10:33.923 --rc genhtml_legend=1 00:10:33.923 --rc geninfo_all_blocks=1 00:10:33.923 --rc geninfo_unexecuted_blocks=1 00:10:33.923 00:10:33.923 ' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:33.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.923 --rc genhtml_branch_coverage=1 00:10:33.923 --rc genhtml_function_coverage=1 00:10:33.923 --rc genhtml_legend=1 00:10:33.923 --rc geninfo_all_blocks=1 00:10:33.923 --rc geninfo_unexecuted_blocks=1 00:10:33.923 00:10:33.923 ' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:33.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.923 --rc genhtml_branch_coverage=1 00:10:33.923 --rc genhtml_function_coverage=1 00:10:33.923 --rc genhtml_legend=1 00:10:33.923 --rc geninfo_all_blocks=1 00:10:33.923 --rc geninfo_unexecuted_blocks=1 00:10:33.923 00:10:33.923 ' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:33.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.923 --rc genhtml_branch_coverage=1 00:10:33.923 --rc genhtml_function_coverage=1 00:10:33.923 --rc genhtml_legend=1 00:10:33.923 --rc geninfo_all_blocks=1 00:10:33.923 --rc geninfo_unexecuted_blocks=1 00:10:33.923 00:10:33.923 ' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.923 17:37:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.923 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.924 17:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:42.069 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:42.069 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:42.069 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:42.069 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:42.070 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.070 17:37:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.070 17:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:42.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:10:42.070 00:10:42.070 --- 10.0.0.2 ping statistics --- 00:10:42.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.070 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:42.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:10:42.070 00:10:42.070 --- 10.0.0.1 ping statistics --- 00:10:42.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.070 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=2487470 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 2487470 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2487470 ']' 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.070 [2024-11-20 17:37:41.133502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:42.070 [2024-11-20 17:37:41.133566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.070 [2024-11-20 17:37:41.221328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.070 [2024-11-20 17:37:41.270706] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.070 [2024-11-20 17:37:41.270767] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.070 [2024-11-20 17:37:41.270779] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.070 [2024-11-20 17:37:41.270789] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.070 [2024-11-20 17:37:41.270798] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.070 [2024-11-20 17:37:41.270960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.070 [2024-11-20 17:37:41.271115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.070 [2024-11-20 17:37:41.271272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.070 [2024-11-20 17:37:41.271273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.070 17:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.332 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.332 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:42.332 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.332 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.332 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.332 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:42.332 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.332 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:10:42.333 [2024-11-20 17:37:42.090515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.333 Malloc0 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:42.333 [2024-11-20 17:37:42.165943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2487627 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2487629 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:42.333 { 00:10:42.333 "params": { 
00:10:42.333 "name": "Nvme$subsystem", 00:10:42.333 "trtype": "$TEST_TRANSPORT", 00:10:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.333 "adrfam": "ipv4", 00:10:42.333 "trsvcid": "$NVMF_PORT", 00:10:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.333 "hdgst": ${hdgst:-false}, 00:10:42.333 "ddgst": ${ddgst:-false} 00:10:42.333 }, 00:10:42.333 "method": "bdev_nvme_attach_controller" 00:10:42.333 } 00:10:42.333 EOF 00:10:42.333 )") 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2487631 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:42.333 { 00:10:42.333 "params": { 00:10:42.333 "name": "Nvme$subsystem", 00:10:42.333 "trtype": "$TEST_TRANSPORT", 00:10:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.333 "adrfam": "ipv4", 00:10:42.333 "trsvcid": "$NVMF_PORT", 00:10:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.333 "hdgst": ${hdgst:-false}, 00:10:42.333 "ddgst": ${ddgst:-false} 00:10:42.333 }, 00:10:42.333 "method": "bdev_nvme_attach_controller" 00:10:42.333 } 00:10:42.333 EOF 00:10:42.333 )") 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2487634 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:42.333 { 00:10:42.333 "params": { 00:10:42.333 "name": "Nvme$subsystem", 00:10:42.333 "trtype": "$TEST_TRANSPORT", 00:10:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.333 "adrfam": "ipv4", 00:10:42.333 "trsvcid": "$NVMF_PORT", 00:10:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.333 "hdgst": ${hdgst:-false}, 
00:10:42.333 "ddgst": ${ddgst:-false} 00:10:42.333 }, 00:10:42.333 "method": "bdev_nvme_attach_controller" 00:10:42.333 } 00:10:42.333 EOF 00:10:42.333 )") 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:42.333 { 00:10:42.333 "params": { 00:10:42.333 "name": "Nvme$subsystem", 00:10:42.333 "trtype": "$TEST_TRANSPORT", 00:10:42.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.333 "adrfam": "ipv4", 00:10:42.333 "trsvcid": "$NVMF_PORT", 00:10:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.333 "hdgst": ${hdgst:-false}, 00:10:42.333 "ddgst": ${ddgst:-false} 00:10:42.333 }, 00:10:42.333 "method": "bdev_nvme_attach_controller" 00:10:42.333 } 00:10:42.333 EOF 00:10:42.333 )") 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2487627 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:42.333 "params": { 00:10:42.333 "name": "Nvme1", 00:10:42.333 "trtype": "tcp", 00:10:42.333 "traddr": "10.0.0.2", 00:10:42.333 "adrfam": "ipv4", 00:10:42.333 "trsvcid": "4420", 00:10:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.333 "hdgst": false, 00:10:42.333 "ddgst": false 00:10:42.333 }, 00:10:42.333 "method": "bdev_nvme_attach_controller" 00:10:42.333 }' 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:42.333 "params": { 00:10:42.333 "name": "Nvme1", 00:10:42.333 "trtype": "tcp", 00:10:42.333 "traddr": "10.0.0.2", 00:10:42.333 "adrfam": "ipv4", 00:10:42.333 "trsvcid": "4420", 00:10:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.333 "hdgst": false, 00:10:42.333 "ddgst": false 00:10:42.333 }, 00:10:42.333 "method": "bdev_nvme_attach_controller" 00:10:42.333 }' 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:42.333 "params": { 00:10:42.333 "name": "Nvme1", 00:10:42.333 "trtype": "tcp", 00:10:42.333 "traddr": "10.0.0.2", 00:10:42.333 "adrfam": "ipv4", 00:10:42.333 "trsvcid": "4420", 00:10:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.333 "hdgst": false, 00:10:42.333 "ddgst": false 00:10:42.333 }, 00:10:42.333 "method": "bdev_nvme_attach_controller" 00:10:42.333 }' 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:42.333 17:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:42.333 "params": { 00:10:42.333 "name": "Nvme1", 00:10:42.333 "trtype": "tcp", 00:10:42.333 "traddr": "10.0.0.2", 00:10:42.333 "adrfam": "ipv4", 00:10:42.333 "trsvcid": "4420", 00:10:42.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.333 "hdgst": false, 00:10:42.333 "ddgst": false 00:10:42.333 }, 00:10:42.333 "method": "bdev_nvme_attach_controller" 00:10:42.333 }' 00:10:42.333 [2024-11-20 17:37:42.223332] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:42.333 [2024-11-20 17:37:42.223406] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:42.333 [2024-11-20 17:37:42.223545] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:42.333 [2024-11-20 17:37:42.223608] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:42.333 [2024-11-20 17:37:42.224809] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:42.333 [2024-11-20 17:37:42.224873] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:42.333 [2024-11-20 17:37:42.229359] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:42.333 [2024-11-20 17:37:42.229426] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:10:42.595 [2024-11-20 17:37:42.436196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:42.596 [2024-11-20 17:37:42.465010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:10:42.856 [2024-11-20 17:37:42.527064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:42.856 [2024-11-20 17:37:42.555637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:10:42.856 [2024-11-20 17:37:42.623814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:42.856 [2024-11-20 17:37:42.656463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:10:42.857 [2024-11-20 17:37:42.676261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:42.857 [2024-11-20 17:37:42.702094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:10:43.117 Running I/O for 1 seconds...
00:10:43.378 Running I/O for 1 seconds...
00:10:43.378 Running I/O for 1 seconds...
00:10:43.378 Running I/O for 1 seconds...
00:10:44.322 10819.00 IOPS, 42.26 MiB/s
00:10:44.323
00:10:44.323 Latency(us)
00:10:44.323 [2024-11-20T16:37:44.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:44.323 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:44.323 Nvme1n1 : 1.01 10875.45 42.48 0.00 0.00 11724.44 4833.28 15837.87
00:10:44.323 [2024-11-20T16:37:44.239Z] ===================================================================================================================
00:10:44.323 [2024-11-20T16:37:44.239Z] Total : 10875.45 42.48 0.00 0.00 11724.44 4833.28 15837.87
00:10:44.323 10192.00 IOPS, 39.81 MiB/s
00:10:44.323 Latency(us)
00:10:44.323 [2024-11-20T16:37:44.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:44.323 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:44.323 Nvme1n1 : 1.01 10258.18 40.07 0.00 0.00 12433.57 4068.69 18350.08
00:10:44.323 [2024-11-20T16:37:44.239Z] ===================================================================================================================
00:10:44.323 [2024-11-20T16:37:44.239Z] Total : 10258.18 40.07 0.00 0.00 12433.57 4068.69 18350.08
00:10:44.323 9683.00 IOPS, 37.82 MiB/s
[2024-11-20T16:37:44.239Z] 188048.00 IOPS, 734.56 MiB/s
00:10:44.323 Latency(us)
00:10:44.323 [2024-11-20T16:37:44.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:44.323 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:44.323 Nvme1n1 : 1.00 187668.46 733.08 0.00 0.00 677.91 314.03 1979.73
00:10:44.323 [2024-11-20T16:37:44.239Z] ===================================================================================================================
00:10:44.323 [2024-11-20T16:37:44.239Z] Total : 187668.46 733.08 0.00 0.00 677.91 314.03 1979.73
00:10:44.323
00:10:44.323 Latency(us)
00:10:44.323 [2024-11-20T16:37:44.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:44.323 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:44.323 Nvme1n1 : 1.01 9745.41 38.07 0.00 0.00 13089.13 5133.65 26105.17
00:10:44.323 [2024-11-20T16:37:44.239Z] ===================================================================================================================
00:10:44.323 [2024-11-20T16:37:44.239Z] Total : 9745.41 38.07 0.00 0.00 13089.13 5133.65 26105.17
00:10:44.323 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2487629
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2487631
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2487634
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:44.584 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:44.584 rmmod nvme_tcp
00:10:44.584 rmmod nvme_fabrics
00:10:44.584 rmmod nvme_keyring
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 2487470 ']'
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 2487470
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2487470 ']'
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2487470
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2487470
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- #
echo 'killing process with pid 2487470' 00:10:44.585 killing process with pid 2487470 00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2487470 00:10:44.585 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2487470 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.846 17:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.393 00:10:47.393 real 0m13.409s 00:10:47.393 user 0m20.878s 00:10:47.393 sys 0m7.734s 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.393 ************************************ 00:10:47.393 END TEST nvmf_bdev_io_wait 00:10:47.393 ************************************ 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.393 ************************************ 00:10:47.393 START TEST nvmf_queue_depth 00:10:47.393 ************************************ 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:47.393 * Looking for test storage... 
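nvmf_bdev_io_wait tears down exactly what it built, and the queue_depth test that opens next repeats the same bring-up against a single-core target (-m 0x2). A condensed sketch of the teardown order just traced; helper names are the suite's own, and the final namespace removal (inside _remove_spdk_ns, silenced here via 15> /dev/null) is an assumption inferred from the flush of cvl_0_1 that follows it:

# Teardown mirrors setup, in reverse order.
trap - SIGINT SIGTERM EXIT                                # body succeeded: disarm the cleanup trap
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the subsystem before the target exits
sync
modprobe -v -r nvme-tcp                                   # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"                                    # kill + wait on the nvmf_tgt reactor process
iptables-save | grep -v SPDK_NVMF | iptables-restore      # strip only the rules ipts() tagged with SPDK_NVMF
_remove_spdk_ns                                           # assumed: deletes cvl_0_0_ns_spdk, returning cvl_0_0

The storage probe that continues below is the first step of that new test.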
00:10:47.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:47.393 17:37:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:47.393 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:47.393 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.393 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.393 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.393 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.394 --rc genhtml_branch_coverage=1 00:10:47.394 --rc genhtml_function_coverage=1 00:10:47.394 --rc genhtml_legend=1 00:10:47.394 --rc geninfo_all_blocks=1 00:10:47.394 --rc geninfo_unexecuted_blocks=1 00:10:47.394 00:10:47.394 ' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.394 --rc genhtml_branch_coverage=1 00:10:47.394 --rc genhtml_function_coverage=1 00:10:47.394 --rc genhtml_legend=1 00:10:47.394 --rc geninfo_all_blocks=1 00:10:47.394 --rc geninfo_unexecuted_blocks=1 00:10:47.394 00:10:47.394 ' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.394 --rc genhtml_branch_coverage=1 00:10:47.394 --rc genhtml_function_coverage=1 00:10:47.394 --rc genhtml_legend=1 00:10:47.394 --rc geninfo_all_blocks=1 00:10:47.394 --rc geninfo_unexecuted_blocks=1 00:10:47.394 00:10:47.394 ' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:47.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.394 --rc genhtml_branch_coverage=1 00:10:47.394 --rc genhtml_function_coverage=1 00:10:47.394 --rc genhtml_legend=1 00:10:47.394 --rc geninfo_all_blocks=1 00:10:47.394 --rc geninfo_unexecuted_blocks=1 00:10:47.394 00:10:47.394 ' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:47.394 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.395 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.395 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.395 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:47.395 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:47.395 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.395 17:37:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:55.583 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:55.583 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:55.583 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:55.583 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:10:55.583 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:10:55.584 00:10:55.584 --- 10.0.0.2 ping statistics --- 00:10:55.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.584 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:55.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:55.584 00:10:55.584 --- 10.0.0.1 ping statistics --- 00:10:55.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.584 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=2492337 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 2492337 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2492337 ']' 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.584 17:37:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.584 [2024-11-20 17:37:54.667718] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:55.584 [2024-11-20 17:37:54.667786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.584 [2024-11-20 17:37:54.759438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.584 [2024-11-20 17:37:54.806175] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.584 [2024-11-20 17:37:54.806230] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.584 [2024-11-20 17:37:54.806239] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.584 [2024-11-20 17:37:54.806246] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.584 [2024-11-20 17:37:54.806252] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.584 [2024-11-20 17:37:54.806276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.846 [2024-11-20 17:37:55.548106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.846 Malloc0 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.846 17:37:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.846 [2024-11-20 17:37:55.616962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2492682 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2492682 /var/tmp/bdevperf.sock 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2492682 ']' 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:55.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.846 17:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:55.846 [2024-11-20 17:37:55.674642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
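queue_depth.sh@21-33, traced above and continuing below, wires the whole experiment up over JSON-RPC: a namespaced nvmf_tgt pinned to core 1 (-m 0x2), a TCP transport, a 64 MiB malloc-backed namespace under nqn.2016-06.io.spdk:cnode1, and a second SPDK app (bdevperf) driving 4 KiB verify I/O at queue depth 1024 against it. As a sketch, the same flow as standalone commands from an SPDK checkout; rpc_cmd above is a thin wrapper over scripts/rpc.py, and in the real script waitforlisten gates each step until the RPC socket exists (the -o and -u 8192 transport options come straight from NVMF_TRANSPORT_OPTS as assembled at common.sh@489):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf idles (-z) on its own RPC socket until a bdev is attached
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests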
00:10:55.846 [2024-11-20 17:37:55.674705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492682 ] 00:10:55.846 [2024-11-20 17:37:55.755255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.107 [2024-11-20 17:37:55.802103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.678 17:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.679 17:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:56.679 17:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:56.679 17:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.679 17:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.940 NVMe0n1 00:10:56.940 17:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.940 17:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:56.940 Running I/O for 10 seconds... 00:10:58.826 8206.00 IOPS, 32.05 MiB/s [2024-11-20T16:38:00.126Z] 9734.00 IOPS, 38.02 MiB/s [2024-11-20T16:38:01.070Z] 10397.33 IOPS, 40.61 MiB/s [2024-11-20T16:38:02.017Z] 10850.00 IOPS, 42.38 MiB/s [2024-11-20T16:38:02.958Z] 11361.60 IOPS, 44.38 MiB/s [2024-11-20T16:38:03.900Z] 11744.50 IOPS, 45.88 MiB/s [2024-11-20T16:38:04.843Z] 11997.86 IOPS, 46.87 MiB/s [2024-11-20T16:38:05.784Z] 12182.50 IOPS, 47.59 MiB/s [2024-11-20T16:38:07.169Z] 12396.22 IOPS, 48.42 MiB/s [2024-11-20T16:38:07.169Z] 12494.80 IOPS, 48.81 MiB/s 00:11:07.253 Latency(us) 00:11:07.253 [2024-11-20T16:38:07.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.253 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:07.253 Verification LBA range: start 0x0 length 0x4000 00:11:07.253 NVMe0n1 : 10.05 12531.86 48.95 0.00 0.00 81455.37 16056.32 74274.13 00:11:07.253 [2024-11-20T16:38:07.169Z] =================================================================================================================== 00:11:07.253 [2024-11-20T16:38:07.169Z] Total : 12531.86 48.95 0.00 0.00 81455.37 16056.32 74274.13 00:11:07.253 { 00:11:07.253 "results": [ 00:11:07.253 { 00:11:07.253 "job": "NVMe0n1", 00:11:07.253 "core_mask": "0x1", 00:11:07.253 "workload": "verify", 00:11:07.253 "status": "finished", 00:11:07.253 "verify_range": { 00:11:07.253 "start": 0, 00:11:07.253 "length": 16384 00:11:07.253 }, 00:11:07.253 "queue_depth": 1024, 00:11:07.253 "io_size": 4096, 00:11:07.253 "runtime": 10.051501, 00:11:07.253 "iops": 12531.859669516025, 00:11:07.253 "mibps": 48.95257683404697, 00:11:07.253 "io_failed": 0, 00:11:07.253 "io_timeout": 0, 00:11:07.253 "avg_latency_us": 81455.37348110043, 00:11:07.253 "min_latency_us": 16056.32, 00:11:07.253 "max_latency_us": 74274.13333333333 00:11:07.253 } 00:11:07.253 ], 00:11:07.253 "core_count": 1 00:11:07.253 } 00:11:07.253 17:38:06 
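The Latency(us) table and the JSON block under it are the same result in two renderings, and the MiB/s figure is derived rather than measured: mibps = iops * io_size / 2^20, with io_size the 4096-byte -o value passed to bdevperf. Checking this run's numbers with a quick awk one-liner (a sanity check, not part of the test):

awk 'BEGIN { printf "%.8f\n", 12531.859669516025 * 4096 / (1024 * 1024) }'
# prints 48.95257683, matching the reported "mibps"; likewise iops * runtime
# (12531.86 * 10.051501) gives the roughly 125,964 I/Os completed in the 10 s window.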
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2492682 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2492682 ']' 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2492682 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2492682 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2492682' 00:11:07.253 killing process with pid 2492682 00:11:07.253 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2492682 00:11:07.253 Received shutdown signal, test time was about 10.000000 seconds 00:11:07.253 00:11:07.253 Latency(us) 00:11:07.253 [2024-11-20T16:38:07.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.253 [2024-11-20T16:38:07.170Z] =================================================================================================================== 00:11:07.254 [2024-11-20T16:38:07.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2492682 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.254 17:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.254 rmmod nvme_tcp 00:11:07.254 rmmod nvme_fabrics 00:11:07.254 rmmod nvme_keyring 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 2492337 ']' 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 2492337 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2492337 ']' 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 2492337 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2492337 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2492337' 00:11:07.254 killing process with pid 2492337 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2492337 00:11:07.254 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2492337 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.516 17:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.428 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.428 00:11:09.428 real 0m22.506s 00:11:09.428 user 0m25.687s 00:11:09.428 sys 0m7.138s 00:11:09.428 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.428 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.428 ************************************ 00:11:09.428 END TEST nvmf_queue_depth 00:11:09.428 ************************************ 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.690 ************************************ 00:11:09.690 START TEST nvmf_target_multipath 00:11:09.690 ************************************ 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:09.690 * Looking for test storage... 00:11:09.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.690 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:09.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.690 --rc genhtml_branch_coverage=1 00:11:09.690 --rc genhtml_function_coverage=1 00:11:09.690 --rc genhtml_legend=1 00:11:09.690 --rc geninfo_all_blocks=1 00:11:09.690 --rc geninfo_unexecuted_blocks=1 00:11:09.690 00:11:09.690 ' 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:09.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.951 --rc genhtml_branch_coverage=1 00:11:09.951 --rc genhtml_function_coverage=1 00:11:09.951 --rc genhtml_legend=1 00:11:09.951 --rc geninfo_all_blocks=1 00:11:09.951 --rc geninfo_unexecuted_blocks=1 00:11:09.951 00:11:09.951 ' 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:09.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.951 --rc genhtml_branch_coverage=1 00:11:09.951 --rc genhtml_function_coverage=1 00:11:09.951 --rc genhtml_legend=1 00:11:09.951 --rc geninfo_all_blocks=1 00:11:09.951 --rc geninfo_unexecuted_blocks=1 00:11:09.951 00:11:09.951 ' 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:09.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.951 --rc genhtml_branch_coverage=1 00:11:09.951 --rc genhtml_function_coverage=1 00:11:09.951 --rc genhtml_legend=1 00:11:09.951 --rc geninfo_all_blocks=1 00:11:09.951 --rc geninfo_unexecuted_blocks=1 00:11:09.951 00:11:09.951 ' 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.951 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.952 17:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:18.091 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # 
[[ ice == unbound ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:18.091 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:18.091 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:18.091 17:38:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:18.091 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:18.091 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.092 17:38:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:11:18.092 00:11:18.092 --- 10.0.0.2 ping statistics --- 00:11:18.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.092 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:11:18.092 00:11:18.092 --- 10.0.0.1 ping statistics --- 00:11:18.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.092 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:18.092 only one NIC for nvmf test 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
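multipath.sh@45-48, a few lines above, is the test opting out rather than failing: NVMF_SECOND_TARGET_IP was assigned empty back at common.sh@262 (this rig exposes only the one cvl_0_0/cvl_0_1 pair), so the already-expanded test at @45 reads as '[' -z ']'. Reconstructed from the trace, the gate plausibly looks like this in the script:

# no second path to exercise, so clean up and report success (a skip, not a failure)
if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
    echo 'only one NIC for nvmf test'
    nvmftestfini        # full teardown; its module-unload retry loop continues below
    exit 0
fi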
00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.092 rmmod nvme_tcp 00:11:18.092 rmmod nvme_fabrics 00:11:18.092 rmmod nvme_keyring 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.092 17:38:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n '' ']' 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:19.564 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.565 00:11:19.565 real 0m9.987s 00:11:19.565 user 0m2.113s 00:11:19.565 sys 0m5.807s 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:19.565 ************************************ 00:11:19.565 END TEST nvmf_target_multipath 00:11:19.565 ************************************ 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.565 17:38:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.827 ************************************ 00:11:19.827 START TEST nvmf_zcopy 00:11:19.827 ************************************ 00:11:19.827 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:19.827 * Looking for test storage... 
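The teardown above ran twice, once called directly at multipath.sh@47 and once more when the EXIT trap installed by nvmftestinit fired on exit 0, which is why the modprobe sequence repeats with no rmmod output the second time: the modules were already gone. Condensed, with the pieces common.sh hides behind xtrace_disable flagged as assumptions:

sync
set +e                                        # unload may fail while queues drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break          # break condition assumed; the trace hides it
done
modprobe -v -r nvme-fabrics
set -e
[ -n "$nvmfpid" ] && killprocess "$nvmfpid"   # skipped here: multipath never started a target
iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip just the tagged 4420 rule
_remove_spdk_ns                               # assumed to delete cvl_0_0_ns_spdk; body not traced
ip -4 addr flush cvl_0_1                      # drop 10.0.0.1/24 from the initiator port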
00:11:19.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.827 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:19.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.828 --rc genhtml_branch_coverage=1 00:11:19.828 --rc genhtml_function_coverage=1 00:11:19.828 --rc genhtml_legend=1 00:11:19.828 --rc geninfo_all_blocks=1 00:11:19.828 --rc geninfo_unexecuted_blocks=1 00:11:19.828 00:11:19.828 ' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:19.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.828 --rc genhtml_branch_coverage=1 00:11:19.828 --rc genhtml_function_coverage=1 00:11:19.828 --rc genhtml_legend=1 00:11:19.828 --rc geninfo_all_blocks=1 00:11:19.828 --rc geninfo_unexecuted_blocks=1 00:11:19.828 00:11:19.828 ' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:19.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.828 --rc genhtml_branch_coverage=1 00:11:19.828 --rc genhtml_function_coverage=1 00:11:19.828 --rc genhtml_legend=1 00:11:19.828 --rc geninfo_all_blocks=1 00:11:19.828 --rc geninfo_unexecuted_blocks=1 00:11:19.828 00:11:19.828 ' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:19.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.828 --rc genhtml_branch_coverage=1 00:11:19.828 --rc genhtml_function_coverage=1 00:11:19.828 --rc genhtml_legend=1 00:11:19.828 --rc geninfo_all_blocks=1 00:11:19.828 --rc geninfo_unexecuted_blocks=1 00:11:19.828 00:11:19.828 ' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.828 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.829 17:38:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:27.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:27.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:27.971 
17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:27.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:27.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.971 17:38:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.971 17:38:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.971 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.971 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.971 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.971 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.971 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.971 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:11:27.972 00:11:27.972 --- 10.0.0.2 ping statistics --- 00:11:27.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.972 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:11:27.972 00:11:27.972 --- 10.0.0.1 ping statistics --- 00:11:27.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.972 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=2503385 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 2503385 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2503385 ']' 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.972 17:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.972 [2024-11-20 17:38:27.348251] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
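
With both pings green, the namespace plumbing that nvmf_tcp_init traced out above is complete. It reduces to a handful of iproute2 calls: one E810 port (cvl_0_0) is moved into a fresh network namespace to play the target, its peer port (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP port between them. A minimal standalone sketch of the same topology, using the interface names and addresses from this run (they will enumerate differently on other rigs):

    # Rebuild the loopback-over-phy topology traced above (run as root).
    # cvl_0_0/cvl_0_1 are this rig's two E810 ports; adjust for other hardware.
    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"            # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                # root ns -> target, as checked above
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target ns -> initiator
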
00:11:27.972 [2024-11-20 17:38:27.348315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.972 [2024-11-20 17:38:27.435523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.972 [2024-11-20 17:38:27.481738] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.972 [2024-11-20 17:38:27.481793] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.972 [2024-11-20 17:38:27.481802] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.972 [2024-11-20 17:38:27.481810] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.972 [2024-11-20 17:38:27.481817] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.972 [2024-11-20 17:38:27.481841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.546 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.546 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:28.546 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:28.546 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.547 [2024-11-20 17:38:28.219835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.547 [2024-11-20 17:38:28.244096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.547 malloc0 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:28.547 { 00:11:28.547 "params": { 00:11:28.547 "name": "Nvme$subsystem", 00:11:28.547 "trtype": "$TEST_TRANSPORT", 00:11:28.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:28.547 "adrfam": "ipv4", 00:11:28.547 "trsvcid": "$NVMF_PORT", 00:11:28.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:28.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:28.547 "hdgst": ${hdgst:-false}, 00:11:28.547 "ddgst": ${ddgst:-false} 00:11:28.547 }, 00:11:28.547 "method": "bdev_nvme_attach_controller" 00:11:28.547 } 00:11:28.547 EOF 00:11:28.547 )") 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
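
At this point the target is fully configured (transport, subsystem, listeners, and a malloc-backed namespace) and the first bdevperf, a 10-second verify run, is being launched with its config generated inline. The rpc_cmd traces above go through the harness's wrapper around scripts/rpc.py, so the same bring-up can be replayed by hand against the running nvmf_tgt. A sketch with every RPC argument taken verbatim from the trace; only the $rpc path is spelled out for this workspace:

    # Replay of the zcopy.sh target bring-up traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero-copy on, no in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                   # -a: any host may connect; -m: up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0           # 32 MB RAM-backed bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
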
00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:11:28.547 17:38:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:28.547 "params": { 00:11:28.547 "name": "Nvme1", 00:11:28.547 "trtype": "tcp", 00:11:28.547 "traddr": "10.0.0.2", 00:11:28.547 "adrfam": "ipv4", 00:11:28.547 "trsvcid": "4420", 00:11:28.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.547 "hdgst": false, 00:11:28.547 "ddgst": false 00:11:28.547 }, 00:11:28.547 "method": "bdev_nvme_attach_controller" 00:11:28.547 }' 00:11:28.547 [2024-11-20 17:38:28.357324] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:28.547 [2024-11-20 17:38:28.357388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503743 ] 00:11:28.547 [2024-11-20 17:38:28.437821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.808 [2024-11-20 17:38:28.484478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.068 Running I/O for 10 seconds... 00:11:30.956 7889.00 IOPS, 61.63 MiB/s [2024-11-20T16:38:31.816Z] 8814.50 IOPS, 68.86 MiB/s [2024-11-20T16:38:33.202Z] 9125.00 IOPS, 71.29 MiB/s [2024-11-20T16:38:34.145Z] 9286.75 IOPS, 72.55 MiB/s [2024-11-20T16:38:35.087Z] 9382.00 IOPS, 73.30 MiB/s [2024-11-20T16:38:36.030Z] 9444.50 IOPS, 73.79 MiB/s [2024-11-20T16:38:36.978Z] 9490.57 IOPS, 74.15 MiB/s [2024-11-20T16:38:37.920Z] 9522.12 IOPS, 74.39 MiB/s [2024-11-20T16:38:38.865Z] 9549.56 IOPS, 74.61 MiB/s [2024-11-20T16:38:38.865Z] 9566.90 IOPS, 74.74 MiB/s 00:11:38.949 Latency(us) 00:11:38.949 [2024-11-20T16:38:38.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.949 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:38.949 Verification LBA range: start 0x0 length 0x1000 00:11:38.949 Nvme1n1 : 10.01 9569.41 74.76 0.00 0.00 13329.02 2430.29 27197.44 00:11:38.949 [2024-11-20T16:38:38.865Z] =================================================================================================================== 00:11:38.949 [2024-11-20T16:38:38.865Z] Total : 9569.41 74.76 0.00 0.00 13329.02 2430.29 27197.44 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2505766 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:39.210 { 00:11:39.210 "params": { 00:11:39.210 "name": 
"Nvme$subsystem", 00:11:39.210 "trtype": "$TEST_TRANSPORT", 00:11:39.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.210 "adrfam": "ipv4", 00:11:39.210 "trsvcid": "$NVMF_PORT", 00:11:39.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.210 "hdgst": ${hdgst:-false}, 00:11:39.210 "ddgst": ${ddgst:-false} 00:11:39.210 }, 00:11:39.210 "method": "bdev_nvme_attach_controller" 00:11:39.210 } 00:11:39.210 EOF 00:11:39.210 )") 00:11:39.210 [2024-11-20 17:38:38.946426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:38.946454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:11:39.210 17:38:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:39.210 "params": { 00:11:39.210 "name": "Nvme1", 00:11:39.210 "trtype": "tcp", 00:11:39.210 "traddr": "10.0.0.2", 00:11:39.210 "adrfam": "ipv4", 00:11:39.210 "trsvcid": "4420", 00:11:39.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.210 "hdgst": false, 00:11:39.210 "ddgst": false 00:11:39.210 }, 00:11:39.210 "method": "bdev_nvme_attach_controller" 00:11:39.210 }' 00:11:39.210 [2024-11-20 17:38:38.958426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:38.958436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:38.970453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:38.970460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:38.982483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:38.982490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:38.994514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:38.994521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:38.998946] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:39.210 [2024-11-20 17:38:38.999024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505766 ] 00:11:39.210 [2024-11-20 17:38:39.006545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:39.006553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:39.018575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:39.018582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:39.030606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:39.030613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:39.042637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:39.042644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:39.054667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:39.054674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:39.066697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:39.066704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:39.075036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.210 [2024-11-20 17:38:39.078729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:39.078736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:39.090762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.210 [2024-11-20 17:38:39.090776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.210 [2024-11-20 17:38:39.102785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.210 [2024-11-20 17:38:39.102791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.211 [2024-11-20 17:38:39.102806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.211 [2024-11-20 17:38:39.114825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.211 [2024-11-20 17:38:39.114835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.126860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.126873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.138884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.138893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.150914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:11:39.474 [2024-11-20 17:38:39.150925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.162945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.162953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.174985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.175000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.187010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.187019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.199039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.199047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.211075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.211081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.223100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.223106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.235132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.235140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.247169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.247179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.259198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.259207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.271234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.271248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 Running I/O for 5 seconds... 
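
"Running I/O for 5 seconds..." marks the randrw job going live, and everything after it down to the end of the section is expected noise rather than a failure: pairs of "Requested NSID 1 already in use" / "Unable to add namespace" errors arriving every few milliseconds. Judging from the trace, the script keeps re-adding the already-attached namespace for as long as the perf job is alive; every attempt is rejected, but each one forces the subsystem through a namespace pause/resume cycle, which is precisely the state churn the zero-copy I/O path is being exercised against. A sketch of the presumed loop (the kill -0 liveness check is an assumption, not visible in the trace):

    # Hammer the live subsystem with an invalid add-namespace RPC during I/O.
    # Each call fails ("Requested NSID 1 already in use") but pauses/resumes
    # the namespace, so zcopy requests keep landing across state transitions.
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
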
00:11:39.474 [2024-11-20 17:38:39.283256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.283263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.474 [2024-11-20 17:38:39.298321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.474 [2024-11-20 17:38:39.298337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.475 [2024-11-20 17:38:39.311674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.475 [2024-11-20 17:38:39.311690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.475 [2024-11-20 17:38:39.324566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.475 [2024-11-20 17:38:39.324582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.475 [2024-11-20 17:38:39.337772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.475 [2024-11-20 17:38:39.337787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.475 [2024-11-20 17:38:39.350069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.475 [2024-11-20 17:38:39.350085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.475 [2024-11-20 17:38:39.362737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.475 [2024-11-20 17:38:39.362752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.475 [2024-11-20 17:38:39.376414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.475 [2024-11-20 17:38:39.376429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.389214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.389229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.402139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.402154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.414754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.414769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.427378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.427392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.440151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.440169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.453259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.453274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.466519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 
[2024-11-20 17:38:39.466534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.479982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.479996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.492949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.492964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.506278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.506294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.519240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.519255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.531741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.531756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.545241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.545256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.557550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.557565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.570085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.570100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.583721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.736 [2024-11-20 17:38:39.583735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.736 [2024-11-20 17:38:39.597451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.737 [2024-11-20 17:38:39.597466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.737 [2024-11-20 17:38:39.610048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.737 [2024-11-20 17:38:39.610063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.737 [2024-11-20 17:38:39.623057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.737 [2024-11-20 17:38:39.623072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.737 [2024-11-20 17:38:39.635963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.737 [2024-11-20 17:38:39.635977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.737 [2024-11-20 17:38:39.649350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.737 [2024-11-20 17:38:39.649365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.662601] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.662617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.676144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.676165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.689472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.689487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.702602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.702617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.715426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.715441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.728011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.728026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.740709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.740724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.753536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.753551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.767224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.767238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.780056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.780071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.792648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.792663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.805046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.805061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.818623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.818638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.832126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.832142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.998 [2024-11-20 17:38:39.845226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.998 [2024-11-20 17:38:39.845242] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:39.998 [2024-11-20 17:38:39.858904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:39.998 [2024-11-20 17:38:39.858927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same subsystem.c:2128 / nvmf_rpc.c:1517 error pair repeats every ~13 ms from 17:38:39.871875 through 17:38:43.834605, with per-second bdevperf progress lines interleaved]
00:11:40.521 19153.00 IOPS, 149.63 MiB/s [2024-11-20T16:38:40.437Z]
00:11:41.568 19237.00 IOPS, 150.29 MiB/s [2024-11-20T16:38:41.484Z]
00:11:42.613 19249.33 IOPS, 150.39 MiB/s [2024-11-20T16:38:42.529Z]
00:11:43.398 19279.50 IOPS, 150.62 MiB/s [2024-11-20T16:38:43.314Z]
00:11:44.180 [2024-11-20 17:38:43.834590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:44.180 [2024-11-20 17:38:43.834605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:44.180 [2024-11-20 17:38:43.847939]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.847953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.861294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.861309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.874264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.874279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.887208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.887223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.900182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.900196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.913285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.913300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.926953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.926968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.939536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.939551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.952800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.952815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.966239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.966253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.979500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.979514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:43.992222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:43.992237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:44.004535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:44.004549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:44.018042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:44.018057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:44.030827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:44.030842] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:44.043566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:44.043580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:44.056889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:44.056904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:44.069956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:44.069971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.180 [2024-11-20 17:38:44.083169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.180 [2024-11-20 17:38:44.083183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.096707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.096722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.109856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.109870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.123185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.123199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.136762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.136776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.149496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.149511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.162927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.162942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.175977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.175992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.189305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.189320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.202257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.202272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.215324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.215339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.228005] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.228020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.240989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.241003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.253401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.253415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.265804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.265818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.278907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.278922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.291421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.291435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 19272.80 IOPS, 150.57 MiB/s 00:11:44.440 Latency(us) 00:11:44.440 [2024-11-20T16:38:44.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.440 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:44.440 Nvme1n1 : 5.00 19285.83 150.67 0.00 0.00 6632.80 2812.59 14745.60 00:11:44.440 [2024-11-20T16:38:44.356Z] =================================================================================================================== 00:11:44.440 [2024-11-20T16:38:44.356Z] Total : 19285.83 150.67 0.00 0.00 6632.80 2812.59 14745.60 00:11:44.440 [2024-11-20 17:38:44.301517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.301531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.313548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.313560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.440 [2024-11-20 17:38:44.325580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.440 [2024-11-20 17:38:44.325595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.441 [2024-11-20 17:38:44.337611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.441 [2024-11-20 17:38:44.337621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.441 [2024-11-20 17:38:44.349638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.441 [2024-11-20 17:38:44.349649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.701 [2024-11-20 17:38:44.361666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.701 [2024-11-20 17:38:44.361677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.701 [2024-11-20 17:38:44.373699] 
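The MiB/s column in the summary table above follows directly from the IOPS column and the 8192-byte I/O size named in the Job line (8192 B = 1/128 MiB). A one-off shell check of the Nvme1n1 row, not part of the captured output:

    # 19285.83 IOPS x 8192 bytes per I/O, converted to MiB/s (1 MiB = 1048576 bytes)
    awk 'BEGIN { printf "%.2f MiB/s\n", 19285.83 * 8192 / 1048576 }'
    # -> 150.67 MiB/s, matching the table row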
00:11:44.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2505766) - No such process
00:11:44.701 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2505766
00:11:44.701 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:44.701 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.701 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.702 delay0
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.702 17:38:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:11:44.702 [2024-11-20 17:38:44.608334] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:11:52.844 Initializing NVMe Controllers
00:11:52.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:52.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:52.844 Initialization complete. Launching workers.
00:11:52.844 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 247, failed: 32483
00:11:52.844 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32613, failed to submit 117
00:11:52.844 success 32529, unsuccessful 84, failed 0
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:52.844 rmmod nvme_tcp
00:11:52.844 rmmod nvme_fabrics
00:11:52.844 rmmod nvme_keyring
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 2503385 ']'
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 2503385
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2503385 ']'
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2503385
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2503385
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2503385'
00:11:52.844 killing process with pid 2503385
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2503385
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2503385
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save
17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
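Stripped of the xtrace prefixes, the zcopy tail traced above reduces to the short sequence below. This is a hand-condensed sketch rather than a verbatim extract: it uses SPDK's rpc.py directly where the harness goes through its rpc_cmd wrapper, and it assumes the target from this run is still listening on 10.0.0.2:4420.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NQN=nqn.2016-06.io.spdk:cnode1

    # Swap NSID 1 over to a deliberately slow bdev (delay latencies are in microseconds, ~1 s here)
    "$SPDK"/scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 1
    "$SPDK"/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns "$NQN" delay0 -n 1

    # Queue random I/O against the slow namespace and abort commands while they are in flight
    "$SPDK"/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev is what keeps the 64 queued commands outstanding long enough to be abortable, which is where the "abort submitted 32613 ... success 32529" accounting above comes from.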
00:11:52.844 17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore
17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
17:38:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:54.229 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:54.229
00:11:54.229 real 0m34.564s
00:11:54.229 user 0m45.416s
00:11:54.229 sys 0m11.867s
00:11:54.229 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:54.229 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:54.229 ************************************
00:11:54.229 END TEST nvmf_zcopy
00:11:54.229 ************************************
00:11:54.229 17:38:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:54.229 17:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:54.229 17:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:54.229 17:38:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:54.229 ************************************
00:11:54.229 START TEST nvmf_nmic
00:11:54.229 ************************************
00:11:54.229 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:54.491 * Looking for test storage...
00:11:54.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.491 --rc genhtml_branch_coverage=1 00:11:54.491 --rc genhtml_function_coverage=1 00:11:54.491 --rc genhtml_legend=1 00:11:54.491 --rc geninfo_all_blocks=1 00:11:54.491 --rc geninfo_unexecuted_blocks=1 00:11:54.491 00:11:54.491 ' 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.491 --rc genhtml_branch_coverage=1 00:11:54.491 --rc genhtml_function_coverage=1 00:11:54.491 --rc genhtml_legend=1 00:11:54.491 --rc geninfo_all_blocks=1 00:11:54.491 --rc geninfo_unexecuted_blocks=1 00:11:54.491 00:11:54.491 ' 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.491 --rc genhtml_branch_coverage=1 00:11:54.491 --rc genhtml_function_coverage=1 00:11:54.491 --rc genhtml_legend=1 00:11:54.491 --rc geninfo_all_blocks=1 00:11:54.491 --rc geninfo_unexecuted_blocks=1 00:11:54.491 00:11:54.491 ' 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.491 --rc genhtml_branch_coverage=1 00:11:54.491 --rc genhtml_function_coverage=1 00:11:54.491 --rc genhtml_legend=1 00:11:54.491 --rc geninfo_all_blocks=1 00:11:54.491 --rc geninfo_unexecuted_blocks=1 00:11:54.491 00:11:54.491 ' 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
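The lt/cmp_versions trace above is scripts/common.sh deciding whether the detected lcov (1.15 in this run) predates version 2: it splits both version strings on '.', '-' and ':' and compares them component by component. Reduced to a self-contained sketch (the same pattern, not the harness's exact code):

    # Succeed (return 0) when version $1 sorts strictly before version $2
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
            (( 10#$a < 10#$b )) && return 0
            (( 10#$a > 10#$b )) && return 1
        done
        return 1   # versions are equal
    }

    version_lt 1.15 2 && echo "lcov predates 2.x: keep the 1.x option names"

With 1.15 versus 2 the first components already differ (1 < 2), which is why the trace shows ver1[v]=1, ver2[v]=2 and an immediate return 0.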
00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.491 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:54.492 
17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.492 17:38:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:02.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:02.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.638 
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:02.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:02.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:02.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:02.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms
00:12:02.638
00:12:02.638 --- 10.0.0.2 ping statistics ---
00:12:02.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:02.638 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:02.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:02.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms
00:12:02.638
00:12:02.638 --- 10.0.0.1 ping statistics ---
00:12:02.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:02.638 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:12:02.638 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=2512552
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 2512552
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2512552 ']'
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:02.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:02.637 17:39:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:12:02.638 [2024-11-20 17:39:01.775972] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
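Collecting the nvmftestinit commands just traced: the two ports of the e810 NIC (cvl_0_0, cvl_0_1) are split between a private network namespace for the target and the root namespace for the initiator. A condensed restatement of the setup, with the interface names and addresses from this run (the nvmf_tgt line is the same launch whose EAL parameter dump continues below):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic

    # Verify both directions, then start the target inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF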
00:12:02.638 [2024-11-20 17:39:01.776024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.638 [2024-11-20 17:39:01.859787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.638 [2024-11-20 17:39:01.910488] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.638 [2024-11-20 17:39:01.910552] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.638 [2024-11-20 17:39:01.910564] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.638 [2024-11-20 17:39:01.910575] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.638 [2024-11-20 17:39:01.910583] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.638 [2024-11-20 17:39:01.910752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.638 [2024-11-20 17:39:01.910914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.639 [2024-11-20 17:39:01.911075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.639 [2024-11-20 17:39:01.911076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.900 [2024-11-20 17:39:02.628703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.900 Malloc0 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.900 [2024-11-20 17:39:02.687920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.900 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:02.901 test case1: single bdev can't be used in multiple subsystems 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.901 [2024-11-20 17:39:02.723850] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:02.901 [2024-11-20 17:39:02.723874] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:02.901 [2024-11-20 17:39:02.723886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.901 request: 00:12:02.901 { 00:12:02.901 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:02.901 "namespace": { 00:12:02.901 "bdev_name": "Malloc0", 00:12:02.901 "no_auto_visible": false 
00:12:02.901 }, 00:12:02.901 "method": "nvmf_subsystem_add_ns", 00:12:02.901 "req_id": 1 00:12:02.901 } 00:12:02.901 Got JSON-RPC error response 00:12:02.901 response: 00:12:02.901 { 00:12:02.901 "code": -32602, 00:12:02.901 "message": "Invalid parameters" 00:12:02.901 } 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:02.901 Adding namespace failed - expected result. 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:02.901 test case2: host connect to nvmf target in multiple paths 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.901 [2024-11-20 17:39:02.736016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.901 17:39:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.817 17:39:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:06.201 17:39:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.201 17:39:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:06.201 17:39:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.201 17:39:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:06.201 17:39:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:08.115 17:39:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:08.115 17:39:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:08.115 17:39:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.115 17:39:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:08.115 17:39:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.115 17:39:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:08.115 17:39:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:08.115 [global] 00:12:08.115 thread=1 00:12:08.115 invalidate=1 00:12:08.115 rw=write 00:12:08.115 time_based=1 00:12:08.115 runtime=1 00:12:08.115 ioengine=libaio 00:12:08.115 direct=1 00:12:08.115 bs=4096 00:12:08.115 iodepth=1 00:12:08.115 norandommap=0 00:12:08.115 numjobs=1 00:12:08.115 00:12:08.115 verify_dump=1 00:12:08.115 verify_backlog=512 00:12:08.115 verify_state_save=0 00:12:08.115 do_verify=1 00:12:08.115 verify=crc32c-intel 00:12:08.115 [job0] 00:12:08.115 filename=/dev/nvme0n1 00:12:08.115 Could not set queue depth (nvme0n1) 00:12:08.376 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.376 fio-3.35 00:12:08.376 Starting 1 thread 00:12:09.761 00:12:09.761 job0: (groupid=0, jobs=1): err= 0: pid=2514109: Wed Nov 20 17:39:09 2024 00:12:09.761 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec) 00:12:09.761 slat (nsec): min=25859, max=30421, avg=27045.52, stdev=1142.90 00:12:09.761 clat (usec): min=818, max=42395, avg=35939.88, stdev=14667.97 00:12:09.761 lat (usec): min=847, max=42422, avg=35966.92, stdev=14666.97 00:12:09.761 clat percentiles (usec): 00:12:09.761 | 1.00th=[ 816], 5.00th=[ 865], 10.00th=[ 979], 20.00th=[41157], 00:12:09.761 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:12:09.761 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:09.761 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:09.761 | 99.99th=[42206] 00:12:09.761 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:12:09.761 slat (usec): min=9, max=28918, avg=84.41, stdev=1276.85 00:12:09.761 clat (usec): min=221, max=806, avg=421.50, stdev=79.18 00:12:09.761 lat (usec): min=233, max=29260, avg=505.91, stdev=1276.09 00:12:09.761 clat percentiles (usec): 00:12:09.761 | 1.00th=[ 235], 5.00th=[ 281], 10.00th=[ 314], 20.00th=[ 351], 00:12:09.761 | 30.00th=[ 371], 40.00th=[ 420], 50.00th=[ 437], 60.00th=[ 457], 00:12:09.761 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 515], 00:12:09.761 | 99.00th=[ 627], 99.50th=[ 676], 99.90th=[ 807], 99.95th=[ 807], 00:12:09.761 | 99.99th=[ 807] 00:12:09.761 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:09.761 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:09.761 lat (usec) : 250=1.50%, 500=84.80%, 750=9.57%, 1000=0.75% 00:12:09.761 lat (msec) : 50=3.38% 00:12:09.761 cpu : usr=0.49%, sys=1.67%, ctx=538, majf=0, minf=1 00:12:09.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:09.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.761 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:09.761 00:12:09.761 Run status group 0 (all jobs): 00:12:09.761 READ: bw=82.5KiB/s (84.5kB/s), 82.5KiB/s-82.5KiB/s (84.5kB/s-84.5kB/s), io=84.0KiB (86.0kB), run=1018-1018msec 00:12:09.761 WRITE: bw=2012KiB/s (2060kB/s), 2012KiB/s-2012KiB/s (2060kB/s-2060kB/s), io=2048KiB (2097kB), run=1018-1018msec 00:12:09.761 00:12:09.761 Disk stats (read/write): 00:12:09.761 nvme0n1: ios=43/512, merge=0/0, ticks=1601/208, in_queue=1809, util=98.70%
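[Note] The flattened [global]/[job0] dump above is the job file that fio-wrapper generated for this run; reassembled, it reads:

    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1

That is a single time-based 1-second job of 4 KiB direct libaio writes at queue depth 1 against the namespace exported by cnode1; do_verify=1 with verify=crc32c-intel means the read traffic in the summary above is fio reading back and checksumming what it wrote.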
00:12:09.761 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:09.761 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.761 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:09.761 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:09.761 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.761 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:09.761 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.761 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.762 rmmod nvme_tcp 00:12:09.762 rmmod nvme_fabrics 00:12:09.762 rmmod nvme_keyring 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 2512552 ']' 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 2512552 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2512552 ']' 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2512552 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.762 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2512552 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2512552' 00:12:10.023 killing process with pid 2512552 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2512552 00:12:10.023 17:39:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2512552 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.023 17:39:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.572 17:39:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.573 00:12:12.573 real 0m17.772s 00:12:12.573 user 0m45.793s 00:12:12.573 sys 0m6.444s 00:12:12.573 17:39:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.573 17:39:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.573 ************************************ 00:12:12.573 END TEST nvmf_nmic 00:12:12.573 ************************************ 00:12:12.573 17:39:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:12.573 17:39:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.573 17:39:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.573 17:39:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:12.573 ************************************ 00:12:12.573 START TEST nvmf_fio_target 00:12:12.573 ************************************ 00:12:12.573 17:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:12.573 * Looking for test storage... 
00:12:12.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:12.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.573 --rc genhtml_branch_coverage=1 00:12:12.573 --rc genhtml_function_coverage=1 00:12:12.573 --rc genhtml_legend=1 00:12:12.573 --rc geninfo_all_blocks=1 00:12:12.573 --rc geninfo_unexecuted_blocks=1 00:12:12.573 00:12:12.573 ' 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:12.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.573 --rc genhtml_branch_coverage=1 00:12:12.573 --rc genhtml_function_coverage=1 00:12:12.573 --rc genhtml_legend=1 00:12:12.573 --rc geninfo_all_blocks=1 00:12:12.573 --rc geninfo_unexecuted_blocks=1 00:12:12.573 00:12:12.573 ' 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:12.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.573 --rc genhtml_branch_coverage=1 00:12:12.573 --rc genhtml_function_coverage=1 00:12:12.573 --rc genhtml_legend=1 00:12:12.573 --rc geninfo_all_blocks=1 00:12:12.573 --rc geninfo_unexecuted_blocks=1 00:12:12.573 00:12:12.573 ' 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:12.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.573 --rc genhtml_branch_coverage=1 00:12:12.573 --rc genhtml_function_coverage=1 00:12:12.573 --rc genhtml_legend=1 00:12:12.573 --rc geninfo_all_blocks=1 00:12:12.573 --rc geninfo_unexecuted_blocks=1 00:12:12.573 00:12:12.573 ' 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.573 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.574 17:39:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.574 17:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.721 17:39:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:20.721 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:20.721 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.721 17:39:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:20.721 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:20.721 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.721 17:39:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.721 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:12:20.722 00:12:20.722 --- 10.0.0.2 ping statistics --- 00:12:20.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.722 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:20.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:12:20.722 00:12:20.722 --- 10.0.0.1 ping statistics --- 00:12:20.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.722 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=2519105 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 2519105 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2519105 ']' 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.722 17:39:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.722 [2024-11-20 17:39:19.811745] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
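[Note] For orientation before the trace continues: the fio_target setup that follows builds one subsystem with four namespaces, so a single connect gives the host nvme0n1 through nvme0n4. Condensed to direct scripts/rpc.py calls (the script itself uses its $rpc_py variable and creates the malloc bdevs one at a time; the loop below is shorthand, not the literal script):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512       # repeated seven times: Malloc0 .. Malloc6 (64 MB, 512 B blocks)
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the connect, waitforserial SPDKISFASTANDAWESOME 4 blocks until lsblk shows all four namespaces, and only then does the four-job fio write/verify run start (job0..job3, one per /dev/nvme0n1..n4).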
00:12:20.722 [2024-11-20 17:39:19.811813] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.722 [2024-11-20 17:39:19.897895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.722 [2024-11-20 17:39:19.946816] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.722 [2024-11-20 17:39:19.946868] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.722 [2024-11-20 17:39:19.946877] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.722 [2024-11-20 17:39:19.946884] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.722 [2024-11-20 17:39:19.946890] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.722 [2024-11-20 17:39:19.947042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.722 [2024-11-20 17:39:19.947217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.722 [2024-11-20 17:39:19.947301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.722 [2024-11-20 17:39:19.947301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.983 17:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.983 17:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:20.983 17:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:20.983 17:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.983 17:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.983 17:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.983 17:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:20.983 [2024-11-20 17:39:20.856000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.245 17:39:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:21.245 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:21.245 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:21.506 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:21.506 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:21.800 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:21.800 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:22.135 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:22.135 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:22.135 17:39:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:22.433 17:39:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:22.433 17:39:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:22.694 17:39:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:22.694 17:39:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:22.955 17:39:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:22.955 17:39:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:22.955 17:39:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:23.215 17:39:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:23.215 17:39:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:23.477 17:39:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:23.477 17:39:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:23.477 17:39:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.737 [2024-11-20 17:39:23.527114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.737 17:39:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:23.998 17:39:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:24.259 17:39:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.644 17:39:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:25.644 17:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:25.644 17:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.644 17:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:25.644 17:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:25.644 17:39:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:28.190 17:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:28.190 17:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:28.190 17:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.190 17:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:28.190 17:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.190 17:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:28.190 17:39:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:28.190 [global] 00:12:28.190 thread=1 00:12:28.190 invalidate=1 00:12:28.190 rw=write 00:12:28.190 time_based=1 00:12:28.190 runtime=1 00:12:28.190 ioengine=libaio 00:12:28.190 direct=1 00:12:28.190 bs=4096 00:12:28.190 iodepth=1 00:12:28.190 norandommap=0 00:12:28.190 numjobs=1 00:12:28.190 00:12:28.190 verify_dump=1 00:12:28.190 verify_backlog=512 00:12:28.190 verify_state_save=0 00:12:28.190 do_verify=1 00:12:28.190 verify=crc32c-intel 00:12:28.190 [job0] 00:12:28.190 filename=/dev/nvme0n1 00:12:28.190 [job1] 00:12:28.190 filename=/dev/nvme0n2 00:12:28.190 [job2] 00:12:28.190 filename=/dev/nvme0n3 00:12:28.190 [job3] 00:12:28.190 filename=/dev/nvme0n4 00:12:28.190 Could not set queue depth (nvme0n1) 00:12:28.190 Could not set queue depth (nvme0n2) 00:12:28.190 Could not set queue depth (nvme0n3) 00:12:28.190 Could not set queue depth (nvme0n4) 00:12:28.190 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.190 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.190 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.190 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.190 fio-3.35 00:12:28.190 Starting 4 threads 00:12:29.609 00:12:29.609 job0: (groupid=0, jobs=1): err= 0: pid=2520890: Wed Nov 20 17:39:29 2024 00:12:29.610 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:29.610 slat (nsec): min=24758, max=65494, avg=25692.88, stdev=2640.32 00:12:29.610 clat (usec): min=644, max=1199, avg=983.86, stdev=89.22 00:12:29.610 lat (usec): min=669, max=1225, avg=1009.55, stdev=89.11 00:12:29.610 clat percentiles (usec): 00:12:29.610 | 1.00th=[ 750], 5.00th=[ 799], 10.00th=[ 873], 20.00th=[ 914], 
00:12:29.610 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1020], 00:12:29.610 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:12:29.610 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1205], 99.95th=[ 1205], 00:12:29.610 | 99.99th=[ 1205] 00:12:29.610 write: IOPS=765, BW=3061KiB/s (3134kB/s)(3064KiB/1001msec); 0 zone resets 00:12:29.610 slat (nsec): min=9472, max=62564, avg=29175.72, stdev=9363.05 00:12:29.610 clat (usec): min=211, max=925, avg=589.36, stdev=113.02 00:12:29.610 lat (usec): min=221, max=958, avg=618.54, stdev=116.72 00:12:29.610 clat percentiles (usec): 00:12:29.610 | 1.00th=[ 293], 5.00th=[ 379], 10.00th=[ 449], 20.00th=[ 494], 00:12:29.610 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 619], 00:12:29.610 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 766], 00:12:29.610 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 922], 00:12:29.610 | 99.99th=[ 922] 00:12:29.610 bw ( KiB/s): min= 4096, max= 4096, per=34.02%, avg=4096.00, stdev= 0.00, samples=1 00:12:29.610 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:29.610 lat (usec) : 250=0.23%, 500=12.68%, 750=43.66%, 1000=24.02% 00:12:29.610 lat (msec) : 2=19.41% 00:12:29.610 cpu : usr=2.40%, sys=3.20%, ctx=1278, majf=0, minf=1 00:12:29.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.610 issued rwts: total=512,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.610 job1: (groupid=0, jobs=1): err= 0: pid=2520907: Wed Nov 20 17:39:29 2024 00:12:29.610 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:29.610 slat (nsec): min=7482, max=44630, avg=25807.27, stdev=1932.95 00:12:29.610 clat (usec): min=533, max=1311, avg=961.96, stdev=109.49 00:12:29.610 lat (usec): min=558, max=1336, avg=987.77, stdev=109.32 00:12:29.610 clat percentiles (usec): 00:12:29.610 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 807], 20.00th=[ 873], 00:12:29.610 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 979], 60.00th=[ 1004], 00:12:29.610 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:12:29.610 | 99.00th=[ 1172], 99.50th=[ 1221], 99.90th=[ 1319], 99.95th=[ 1319], 00:12:29.610 | 99.99th=[ 1319] 00:12:29.610 write: IOPS=812, BW=3249KiB/s (3327kB/s)(3252KiB/1001msec); 0 zone resets 00:12:29.610 slat (nsec): min=9526, max=53276, avg=31241.67, stdev=8030.21 00:12:29.610 clat (usec): min=131, max=917, avg=563.87, stdev=126.19 00:12:29.610 lat (usec): min=145, max=949, avg=595.11, stdev=128.71 00:12:29.610 clat percentiles (usec): 00:12:29.610 | 1.00th=[ 265], 5.00th=[ 351], 10.00th=[ 383], 20.00th=[ 457], 00:12:29.610 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 611], 00:12:29.610 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 750], 00:12:29.610 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 914], 99.95th=[ 914], 00:12:29.610 | 99.99th=[ 914] 00:12:29.610 bw ( KiB/s): min= 4096, max= 4096, per=34.02%, avg=4096.00, stdev= 0.00, samples=1 00:12:29.610 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:29.610 lat (usec) : 250=0.23%, 500=19.25%, 750=40.23%, 1000=24.75% 00:12:29.610 lat (msec) : 2=15.55% 00:12:29.610 cpu : usr=1.80%, sys=4.30%, ctx=1325, majf=0, minf=1 00:12:29.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.610 issued rwts: total=512,813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.610 job2: (groupid=0, jobs=1): err= 0: pid=2520927: Wed Nov 20 17:39:29 2024 00:12:29.610 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:29.610 slat (nsec): min=5860, max=44371, avg=22172.42, stdev=7928.87 00:12:29.610 clat (usec): min=516, max=1460, avg=943.55, stdev=131.73 00:12:29.610 lat (usec): min=522, max=1487, avg=965.72, stdev=136.19 00:12:29.610 clat percentiles (usec): 00:12:29.610 | 1.00th=[ 652], 5.00th=[ 734], 10.00th=[ 775], 20.00th=[ 816], 00:12:29.610 | 30.00th=[ 873], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 988], 00:12:29.610 | 70.00th=[ 1012], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1139], 00:12:29.610 | 99.00th=[ 1270], 99.50th=[ 1401], 99.90th=[ 1467], 99.95th=[ 1467], 00:12:29.610 | 99.99th=[ 1467] 00:12:29.610 write: IOPS=924, BW=3696KiB/s (3785kB/s)(3700KiB/1001msec); 0 zone resets 00:12:29.610 slat (nsec): min=6470, max=69478, avg=22482.09, stdev=14530.93 00:12:29.610 clat (usec): min=171, max=1008, avg=514.00, stdev=144.78 00:12:29.610 lat (usec): min=178, max=1044, avg=536.48, stdev=153.65 00:12:29.610 clat percentiles (usec): 00:12:29.610 | 1.00th=[ 231], 5.00th=[ 277], 10.00th=[ 318], 20.00th=[ 388], 00:12:29.610 | 30.00th=[ 433], 40.00th=[ 474], 50.00th=[ 506], 60.00th=[ 545], 00:12:29.610 | 70.00th=[ 594], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 766], 00:12:29.610 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 1012], 99.95th=[ 1012], 00:12:29.610 | 99.99th=[ 1012] 00:12:29.610 bw ( KiB/s): min= 4096, max= 4096, per=34.02%, avg=4096.00, stdev= 0.00, samples=1 00:12:29.610 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:29.610 lat (usec) : 250=1.11%, 500=30.34%, 750=31.32%, 1000=24.63% 00:12:29.610 lat (msec) : 2=12.60% 00:12:29.610 cpu : usr=1.60%, sys=3.30%, ctx=1441, majf=0, minf=1 00:12:29.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.610 issued rwts: total=512,925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.610 job3: (groupid=0, jobs=1): err= 0: pid=2520934: Wed Nov 20 17:39:29 2024 00:12:29.610 read: IOPS=207, BW=830KiB/s (850kB/s)(832KiB/1002msec) 00:12:29.610 slat (nsec): min=25911, max=42705, avg=26850.18, stdev=1806.27 00:12:29.610 clat (usec): min=681, max=42023, avg=3254.97, stdev=8919.05 00:12:29.610 lat (usec): min=708, max=42050, avg=3281.82, stdev=8918.99 00:12:29.610 clat percentiles (usec): 00:12:29.610 | 1.00th=[ 783], 5.00th=[ 922], 10.00th=[ 988], 20.00th=[ 1074], 00:12:29.610 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:12:29.610 | 70.00th=[ 1237], 80.00th=[ 1303], 90.00th=[ 1385], 95.00th=[33162], 00:12:29.610 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:29.610 | 99.99th=[42206] 00:12:29.610 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:12:29.610 slat (nsec): min=10154, max=53987, avg=32150.09, stdev=9478.60 00:12:29.610 clat (usec): min=220, max=898, avg=576.32, 
stdev=132.75 00:12:29.610 lat (usec): min=230, max=942, avg=608.47, stdev=136.59 00:12:29.610 clat percentiles (usec): 00:12:29.610 | 1.00th=[ 258], 5.00th=[ 351], 10.00th=[ 408], 20.00th=[ 465], 00:12:29.610 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:12:29.610 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 791], 00:12:29.610 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 898], 99.95th=[ 898], 00:12:29.610 | 99.99th=[ 898] 00:12:29.610 bw ( KiB/s): min= 4096, max= 4096, per=34.02%, avg=4096.00, stdev= 0.00, samples=1 00:12:29.610 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:29.610 lat (usec) : 250=0.69%, 500=19.31%, 750=45.14%, 1000=9.17% 00:12:29.610 lat (msec) : 2=24.17%, 50=1.53% 00:12:29.610 cpu : usr=1.50%, sys=1.80%, ctx=722, majf=0, minf=1 00:12:29.610 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.610 issued rwts: total=208,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.610 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.610 00:12:29.610 Run status group 0 (all jobs): 00:12:29.611 READ: bw=6962KiB/s (7129kB/s), 830KiB/s-2046KiB/s (850kB/s-2095kB/s), io=6976KiB (7143kB), run=1001-1002msec 00:12:29.611 WRITE: bw=11.8MiB/s (12.3MB/s), 2044KiB/s-3696KiB/s (2093kB/s-3785kB/s), io=11.8MiB (12.4MB), run=1001-1002msec 00:12:29.611 00:12:29.611 Disk stats (read/write): 00:12:29.611 nvme0n1: ios=552/512, merge=0/0, ticks=551/289, in_queue=840, util=87.27% 00:12:29.611 nvme0n2: ios=541/533, merge=0/0, ticks=526/280, in_queue=806, util=86.50% 00:12:29.611 nvme0n3: ios=535/538, merge=0/0, ticks=1409/280, in_queue=1689, util=96.29% 00:12:29.611 nvme0n4: ios=191/512, merge=0/0, ticks=1426/277, in_queue=1703, util=96.46% 00:12:29.611 17:39:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:29.611 [global] 00:12:29.611 thread=1 00:12:29.611 invalidate=1 00:12:29.611 rw=randwrite 00:12:29.611 time_based=1 00:12:29.611 runtime=1 00:12:29.611 ioengine=libaio 00:12:29.611 direct=1 00:12:29.611 bs=4096 00:12:29.611 iodepth=1 00:12:29.611 norandommap=0 00:12:29.611 numjobs=1 00:12:29.611 00:12:29.611 verify_dump=1 00:12:29.611 verify_backlog=512 00:12:29.611 verify_state_save=0 00:12:29.611 do_verify=1 00:12:29.611 verify=crc32c-intel 00:12:29.611 [job0] 00:12:29.611 filename=/dev/nvme0n1 00:12:29.611 [job1] 00:12:29.611 filename=/dev/nvme0n2 00:12:29.611 [job2] 00:12:29.611 filename=/dev/nvme0n3 00:12:29.611 [job3] 00:12:29.611 filename=/dev/nvme0n4 00:12:29.611 Could not set queue depth (nvme0n1) 00:12:29.611 Could not set queue depth (nvme0n2) 00:12:29.611 Could not set queue depth (nvme0n3) 00:12:29.611 Could not set queue depth (nvme0n4) 00:12:29.880 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.880 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.880 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.880 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.880 fio-3.35 00:12:29.880 Starting 4 threads 00:12:31.297 
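[Editor's note] For readers reproducing this outside the harness, the target bring-up traced above condenses to roughly the following RPC sequence. This is a sketch: the long /var/jenkins/... script path is shortened to rpc.py, and the nvme connect --hostnqn/--hostid flags from the trace are omitted.

    rpc.py bdev_malloc_create 64 512                     # repeated to produce Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The waitforserial call after the connect polls lsblk until all four namespaces (Malloc0, Malloc1, raid0, concat0) surface with the subsystem serial. A minimal reconstruction from the common/autotest_common.sh xtrace above — the retry bound, sleep, and grep come straight from the traced lines; the exact helper body is assumed:

    waitforserial() {
        local serial=$1 expected=${2:-1} i=0 found
        while (( i++ <= 15 )); do
            sleep 2
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == expected )) && return 0   # all expected devices visible
        done
        return 1                                  # gave up after ~32s
    }
    waitforserial SPDKISFASTANDAWESOME 4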
00:12:31.297 job0: (groupid=0, jobs=1): err= 0: pid=2521402: Wed Nov 20 17:39:30 2024 00:12:31.297 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:12:31.297 slat (nsec): min=27679, max=28638, avg=28171.41, stdev=239.71 00:12:31.297 clat (usec): min=40816, max=41065, avg=40960.30, stdev=69.99 00:12:31.297 lat (usec): min=40844, max=41094, avg=40988.47, stdev=69.95 00:12:31.297 clat percentiles (usec): 00:12:31.297 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:12:31.297 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:31.297 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:31.297 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:31.297 | 99.99th=[41157] 00:12:31.297 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:12:31.297 slat (nsec): min=9347, max=53634, avg=29904.40, stdev=9493.28 00:12:31.297 clat (usec): min=173, max=1076, avg=568.99, stdev=116.42 00:12:31.297 lat (usec): min=183, max=1114, avg=598.89, stdev=119.97 00:12:31.297 clat percentiles (usec): 00:12:31.297 | 1.00th=[ 306], 5.00th=[ 375], 10.00th=[ 404], 20.00th=[ 474], 00:12:31.297 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:12:31.297 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 734], 00:12:31.297 | 99.00th=[ 824], 99.50th=[ 898], 99.90th=[ 1074], 99.95th=[ 1074], 00:12:31.297 | 99.99th=[ 1074] 00:12:31.297 bw ( KiB/s): min= 4096, max= 4096, per=44.89%, avg=4096.00, stdev= 0.00, samples=1 00:12:31.297 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:31.297 lat (usec) : 250=0.38%, 500=26.84%, 750=65.97%, 1000=3.21% 00:12:31.297 lat (msec) : 2=0.38%, 50=3.21% 00:12:31.297 cpu : usr=0.99%, sys=1.59%, ctx=532, majf=0, minf=1 00:12:31.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.297 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.297 job1: (groupid=0, jobs=1): err= 0: pid=2521406: Wed Nov 20 17:39:30 2024 00:12:31.297 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1014msec) 00:12:31.297 slat (nsec): min=26161, max=27072, avg=26507.53, stdev=215.10 00:12:31.297 clat (usec): min=41907, max=42478, avg=41994.20, stdev=129.54 00:12:31.297 lat (usec): min=41933, max=42504, avg=42020.71, stdev=129.48 00:12:31.297 clat percentiles (usec): 00:12:31.297 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:31.297 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:31.297 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:12:31.297 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:31.297 | 99.99th=[42730] 00:12:31.297 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:12:31.297 slat (nsec): min=9745, max=53342, avg=31032.46, stdev=8162.66 00:12:31.297 clat (usec): min=171, max=1708, avg=544.54, stdev=135.10 00:12:31.297 lat (usec): min=181, max=1742, avg=575.58, stdev=136.93 00:12:31.297 clat percentiles (usec): 00:12:31.297 | 1.00th=[ 253], 5.00th=[ 326], 10.00th=[ 375], 20.00th=[ 429], 00:12:31.297 | 30.00th=[ 482], 40.00th=[ 515], 50.00th=[ 562], 60.00th=[ 586], 00:12:31.298 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 
701], 95.00th=[ 734], 00:12:31.298 | 99.00th=[ 791], 99.50th=[ 816], 99.90th=[ 1713], 99.95th=[ 1713], 00:12:31.298 | 99.99th=[ 1713] 00:12:31.298 bw ( KiB/s): min= 4096, max= 4096, per=44.89%, avg=4096.00, stdev= 0.00, samples=1 00:12:31.298 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:31.298 lat (usec) : 250=0.76%, 500=34.03%, 750=58.79%, 1000=3.02% 00:12:31.298 lat (msec) : 2=0.19%, 50=3.21% 00:12:31.298 cpu : usr=0.99%, sys=1.28%, ctx=531, majf=0, minf=1 00:12:31.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.298 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.298 job2: (groupid=0, jobs=1): err= 0: pid=2521418: Wed Nov 20 17:39:30 2024 00:12:31.298 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:31.298 slat (nsec): min=26427, max=61710, avg=27270.15, stdev=2842.01 00:12:31.298 clat (usec): min=722, max=1143, avg=961.52, stdev=62.13 00:12:31.298 lat (usec): min=749, max=1171, avg=988.79, stdev=62.01 00:12:31.298 clat percentiles (usec): 00:12:31.298 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 930], 00:12:31.298 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:12:31.298 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1057], 00:12:31.298 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1139], 99.95th=[ 1139], 00:12:31.298 | 99.99th=[ 1139] 00:12:31.298 write: IOPS=790, BW=3161KiB/s (3237kB/s)(3164KiB/1001msec); 0 zone resets 00:12:31.298 slat (nsec): min=8969, max=78624, avg=30076.86, stdev=9537.71 00:12:31.298 clat (usec): min=153, max=1237, avg=581.61, stdev=131.31 00:12:31.298 lat (usec): min=186, max=1269, avg=611.68, stdev=134.64 00:12:31.298 clat percentiles (usec): 00:12:31.298 | 1.00th=[ 277], 5.00th=[ 359], 10.00th=[ 412], 20.00th=[ 474], 00:12:31.298 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 619], 00:12:31.298 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:12:31.298 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 1237], 99.95th=[ 1237], 00:12:31.298 | 99.99th=[ 1237] 00:12:31.298 bw ( KiB/s): min= 4096, max= 4096, per=44.89%, avg=4096.00, stdev= 0.00, samples=1 00:12:31.298 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:31.298 lat (usec) : 250=0.23%, 500=16.19%, 750=39.14%, 1000=35.69% 00:12:31.298 lat (msec) : 2=8.75% 00:12:31.298 cpu : usr=2.30%, sys=5.50%, ctx=1304, majf=0, minf=2 00:12:31.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.298 issued rwts: total=512,791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.298 job3: (groupid=0, jobs=1): err= 0: pid=2521426: Wed Nov 20 17:39:30 2024 00:12:31.298 read: IOPS=118, BW=475KiB/s (486kB/s)(484KiB/1020msec) 00:12:31.298 slat (nsec): min=24982, max=44976, avg=26460.37, stdev=2675.24 00:12:31.298 clat (usec): min=708, max=42044, avg=5674.58, stdev=13064.38 00:12:31.298 lat (usec): min=735, max=42070, avg=5701.04, stdev=13064.15 00:12:31.298 clat percentiles (usec): 00:12:31.298 | 1.00th=[ 783], 5.00th=[ 840], 
10.00th=[ 865], 20.00th=[ 914], 00:12:31.298 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:12:31.298 | 70.00th=[ 1020], 80.00th=[ 1074], 90.00th=[41157], 95.00th=[41681], 00:12:31.298 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:31.298 | 99.99th=[42206] 00:12:31.298 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:12:31.298 slat (nsec): min=9545, max=55275, avg=30153.61, stdev=7970.25 00:12:31.298 clat (usec): min=274, max=1814, avg=603.93, stdev=145.96 00:12:31.298 lat (usec): min=288, max=1846, avg=634.08, stdev=147.91 00:12:31.298 clat percentiles (usec): 00:12:31.298 | 1.00th=[ 302], 5.00th=[ 363], 10.00th=[ 412], 20.00th=[ 478], 00:12:31.298 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 652], 00:12:31.298 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 816], 00:12:31.298 | 99.00th=[ 889], 99.50th=[ 930], 99.90th=[ 1811], 99.95th=[ 1811], 00:12:31.298 | 99.99th=[ 1811] 00:12:31.298 bw ( KiB/s): min= 4096, max= 4096, per=44.89%, avg=4096.00, stdev= 0.00, samples=1 00:12:31.298 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:31.298 lat (usec) : 500=19.43%, 750=50.24%, 1000=22.75% 00:12:31.298 lat (msec) : 2=5.37%, 50=2.21% 00:12:31.298 cpu : usr=1.28%, sys=1.47%, ctx=634, majf=0, minf=2 00:12:31.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.298 issued rwts: total=121,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.298 00:12:31.298 Run status group 0 (all jobs): 00:12:31.298 READ: bw=2616KiB/s (2678kB/s), 67.1KiB/s-2046KiB/s (68.7kB/s-2095kB/s), io=2668KiB (2732kB), run=1001-1020msec 00:12:31.298 WRITE: bw=9125KiB/s (9345kB/s), 2008KiB/s-3161KiB/s (2056kB/s-3237kB/s), io=9308KiB (9531kB), run=1001-1020msec 00:12:31.298 00:12:31.298 Disk stats (read/write): 00:12:31.298 nvme0n1: ios=41/512, merge=0/0, ticks=1093/263, in_queue=1356, util=97.19% 00:12:31.298 nvme0n2: ios=62/512, merge=0/0, ticks=1104/262, in_queue=1366, util=96.84% 00:12:31.298 nvme0n3: ios=512/520, merge=0/0, ticks=472/232, in_queue=704, util=88.29% 00:12:31.298 nvme0n4: ios=116/512, merge=0/0, ticks=461/296, in_queue=757, util=89.42% 00:12:31.298 17:39:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:31.298 [global] 00:12:31.298 thread=1 00:12:31.298 invalidate=1 00:12:31.298 rw=write 00:12:31.298 time_based=1 00:12:31.298 runtime=1 00:12:31.298 ioengine=libaio 00:12:31.298 direct=1 00:12:31.298 bs=4096 00:12:31.298 iodepth=128 00:12:31.298 norandommap=0 00:12:31.298 numjobs=1 00:12:31.298 00:12:31.298 verify_dump=1 00:12:31.298 verify_backlog=512 00:12:31.298 verify_state_save=0 00:12:31.298 do_verify=1 00:12:31.298 verify=crc32c-intel 00:12:31.298 [job0] 00:12:31.298 filename=/dev/nvme0n1 00:12:31.298 [job1] 00:12:31.298 filename=/dev/nvme0n2 00:12:31.298 [job2] 00:12:31.298 filename=/dev/nvme0n3 00:12:31.298 [job3] 00:12:31.298 filename=/dev/nvme0n4 00:12:31.298 Could not set queue depth (nvme0n1) 00:12:31.298 Could not set queue depth (nvme0n2) 00:12:31.298 Could not set queue depth (nvme0n3) 00:12:31.298 Could not set queue depth (nvme0n4) 00:12:31.560 job0: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.560 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.560 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.560 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.560 fio-3.35 00:12:31.560 Starting 4 threads 00:12:32.970 00:12:32.970 job0: (groupid=0, jobs=1): err= 0: pid=2521920: Wed Nov 20 17:39:32 2024 00:12:32.970 read: IOPS=8102, BW=31.7MiB/s (33.2MB/s)(32.0MiB/1011msec) 00:12:32.970 slat (nsec): min=921, max=19085k, avg=57411.18, stdev=495181.29 00:12:32.970 clat (usec): min=1422, max=41989, avg=7732.57, stdev=4105.25 00:12:32.970 lat (usec): min=1431, max=41995, avg=7789.98, stdev=4142.24 00:12:32.970 clat percentiles (usec): 00:12:32.970 | 1.00th=[ 2376], 5.00th=[ 3621], 10.00th=[ 4817], 20.00th=[ 5342], 00:12:32.970 | 30.00th=[ 6063], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7111], 00:12:32.970 | 70.00th=[ 7832], 80.00th=[ 8979], 90.00th=[11338], 95.00th=[17171], 00:12:32.970 | 99.00th=[25297], 99.50th=[25297], 99.90th=[38536], 99.95th=[42206], 00:12:32.970 | 99.99th=[42206] 00:12:32.970 write: IOPS=8330, BW=32.5MiB/s (34.1MB/s)(32.9MiB/1011msec); 0 zone resets 00:12:32.970 slat (nsec): min=1618, max=9440.6k, avg=49671.30, stdev=385187.69 00:12:32.970 clat (usec): min=855, max=66871, avg=7627.99, stdev=6202.22 00:12:32.970 lat (usec): min=867, max=66873, avg=7677.66, stdev=6213.22 00:12:32.970 clat percentiles (usec): 00:12:32.970 | 1.00th=[ 1270], 5.00th=[ 2769], 10.00th=[ 3720], 20.00th=[ 4621], 00:12:32.970 | 30.00th=[ 5276], 40.00th=[ 5866], 50.00th=[ 6456], 60.00th=[ 6783], 00:12:32.970 | 70.00th=[ 7242], 80.00th=[ 8225], 90.00th=[11469], 95.00th=[17695], 00:12:32.970 | 99.00th=[36963], 99.50th=[43254], 99.90th=[59507], 99.95th=[66847], 00:12:32.970 | 99.99th=[66847] 00:12:32.970 bw ( KiB/s): min=30560, max=35800, per=34.93%, avg=33180.00, stdev=3705.24, samples=2 00:12:32.970 iops : min= 7640, max= 8950, avg=8295.00, stdev=926.31, samples=2 00:12:32.970 lat (usec) : 1000=0.19% 00:12:32.970 lat (msec) : 2=1.38%, 4=7.16%, 10=77.99%, 20=10.12%, 50=3.02% 00:12:32.970 lat (msec) : 100=0.14% 00:12:32.970 cpu : usr=5.94%, sys=9.50%, ctx=460, majf=0, minf=1 00:12:32.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:32.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.970 issued rwts: total=8192,8422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.970 job1: (groupid=0, jobs=1): err= 0: pid=2521927: Wed Nov 20 17:39:32 2024 00:12:32.970 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:12:32.970 slat (nsec): min=958, max=16816k, avg=138505.49, stdev=960540.72 00:12:32.970 clat (usec): min=4450, max=65899, avg=16004.32, stdev=12278.41 00:12:32.970 lat (usec): min=4459, max=65908, avg=16142.83, stdev=12393.48 00:12:32.970 clat percentiles (usec): 00:12:32.970 | 1.00th=[ 5211], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7963], 00:12:32.970 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[10945], 00:12:32.970 | 70.00th=[15008], 80.00th=[21627], 90.00th=[37487], 95.00th=[44827], 00:12:32.970 | 99.00th=[50070], 99.50th=[53740], 99.90th=[59507], 99.95th=[59507], 
00:12:32.970 | 99.99th=[65799] 00:12:32.970 write: IOPS=3665, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1006msec); 0 zone resets 00:12:32.970 slat (nsec): min=1658, max=40906k, avg=131006.53, stdev=1091824.16 00:12:32.970 clat (usec): min=3778, max=86627, avg=16484.72, stdev=16080.77 00:12:32.970 lat (usec): min=4122, max=86654, avg=16615.73, stdev=16204.25 00:12:32.971 clat percentiles (usec): 00:12:32.971 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7242], 00:12:32.971 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 9634], 60.00th=[11338], 00:12:32.971 | 70.00th=[14877], 80.00th=[19268], 90.00th=[39584], 95.00th=[60031], 00:12:32.971 | 99.00th=[79168], 99.50th=[81265], 99.90th=[85459], 99.95th=[85459], 00:12:32.971 | 99.99th=[86508] 00:12:32.971 bw ( KiB/s): min= 8192, max=20480, per=15.09%, avg=14336.00, stdev=8688.93, samples=2 00:12:32.971 iops : min= 2048, max= 5120, avg=3584.00, stdev=2172.23, samples=2 00:12:32.971 lat (msec) : 4=0.01%, 10=49.06%, 20=29.36%, 50=17.29%, 100=4.28% 00:12:32.971 cpu : usr=3.08%, sys=3.28%, ctx=337, majf=0, minf=2 00:12:32.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:32.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.971 issued rwts: total=3584,3687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.971 job2: (groupid=0, jobs=1): err= 0: pid=2521938: Wed Nov 20 17:39:32 2024 00:12:32.971 read: IOPS=6468, BW=25.3MiB/s (26.5MB/s)(25.4MiB/1005msec) 00:12:32.971 slat (nsec): min=934, max=14401k, avg=67509.49, stdev=531177.57 00:12:32.971 clat (usec): min=2883, max=23556, avg=9885.58, stdev=2681.79 00:12:32.971 lat (usec): min=2889, max=23563, avg=9953.09, stdev=2719.98 00:12:32.971 clat percentiles (usec): 00:12:32.971 | 1.00th=[ 4621], 5.00th=[ 6456], 10.00th=[ 7439], 20.00th=[ 7767], 00:12:32.971 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[10028], 00:12:32.971 | 70.00th=[10683], 80.00th=[11600], 90.00th=[13829], 95.00th=[14746], 00:12:32.971 | 99.00th=[18744], 99.50th=[19792], 99.90th=[22938], 99.95th=[22938], 00:12:32.971 | 99.99th=[23462] 00:12:32.971 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:12:32.971 slat (nsec): min=1678, max=7912.6k, avg=65729.85, stdev=444914.04 00:12:32.971 clat (usec): min=738, max=28200, avg=9507.55, stdev=5287.51 00:12:32.971 lat (usec): min=751, max=28209, avg=9573.28, stdev=5325.51 00:12:32.971 clat percentiles (usec): 00:12:32.971 | 1.00th=[ 1696], 5.00th=[ 3621], 10.00th=[ 4686], 20.00th=[ 5997], 00:12:32.971 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 7373], 60.00th=[ 8160], 00:12:32.971 | 70.00th=[10814], 80.00th=[14222], 90.00th=[17695], 95.00th=[20579], 00:12:32.971 | 99.00th=[25297], 99.50th=[26084], 99.90th=[28181], 99.95th=[28181], 00:12:32.971 | 99.99th=[28181] 00:12:32.971 bw ( KiB/s): min=24576, max=28672, per=28.03%, avg=26624.00, stdev=2896.31, samples=2 00:12:32.971 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:12:32.971 lat (usec) : 750=0.02%, 1000=0.01% 00:12:32.971 lat (msec) : 2=0.70%, 4=2.76%, 10=59.90%, 20=33.47%, 50=3.14% 00:12:32.971 cpu : usr=4.48%, sys=8.17%, ctx=416, majf=0, minf=1 00:12:32.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:32.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.971 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.971 issued rwts: total=6501,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.971 job3: (groupid=0, jobs=1): err= 0: pid=2521945: Wed Nov 20 17:39:32 2024 00:12:32.971 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:12:32.971 slat (nsec): min=927, max=25494k, avg=94868.20, stdev=918555.16 00:12:32.971 clat (usec): min=2308, max=62386, avg=12554.33, stdev=9051.27 00:12:32.971 lat (usec): min=2579, max=62411, avg=12649.20, stdev=9143.23 00:12:32.971 clat percentiles (usec): 00:12:32.971 | 1.00th=[ 4948], 5.00th=[ 6849], 10.00th=[ 7177], 20.00th=[ 7701], 00:12:32.971 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9503], 00:12:32.971 | 70.00th=[11469], 80.00th=[15401], 90.00th=[20579], 95.00th=[40109], 00:12:32.971 | 99.00th=[44827], 99.50th=[45351], 99.90th=[49021], 99.95th=[57934], 00:12:32.971 | 99.99th=[62129] 00:12:32.971 write: IOPS=5194, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1010msec); 0 zone resets 00:12:32.971 slat (nsec): min=1571, max=9853.1k, avg=89663.96, stdev=587339.31 00:12:32.971 clat (usec): min=419, max=77550, avg=12177.00, stdev=13168.70 00:12:32.971 lat (usec): min=551, max=77907, avg=12266.66, stdev=13265.14 00:12:32.971 clat percentiles (usec): 00:12:32.971 | 1.00th=[ 3589], 5.00th=[ 5080], 10.00th=[ 5997], 20.00th=[ 7242], 00:12:32.971 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 9110], 00:12:32.971 | 70.00th=[10421], 80.00th=[11469], 90.00th=[14877], 95.00th=[46924], 00:12:32.971 | 99.00th=[71828], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:12:32.971 | 99.99th=[77071] 00:12:32.971 bw ( KiB/s): min=17208, max=23808, per=21.59%, avg=20508.00, stdev=4666.90, samples=2 00:12:32.971 iops : min= 4302, max= 5952, avg=5127.00, stdev=1166.73, samples=2 00:12:32.971 lat (usec) : 500=0.01% 00:12:32.971 lat (msec) : 2=0.18%, 4=1.06%, 10=62.56%, 20=26.72%, 50=6.96% 00:12:32.971 lat (msec) : 100=2.51% 00:12:32.971 cpu : usr=3.96%, sys=5.55%, ctx=344, majf=0, minf=2 00:12:32.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:32.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.971 issued rwts: total=5120,5246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.971 00:12:32.971 Run status group 0 (all jobs): 00:12:32.971 READ: bw=90.4MiB/s (94.8MB/s), 13.9MiB/s-31.7MiB/s (14.6MB/s-33.2MB/s), io=91.4MiB (95.8MB), run=1005-1011msec 00:12:32.971 WRITE: bw=92.8MiB/s (97.3MB/s), 14.3MiB/s-32.5MiB/s (15.0MB/s-34.1MB/s), io=93.8MiB (98.3MB), run=1005-1011msec 00:12:32.971 00:12:32.971 Disk stats (read/write): 00:12:32.971 nvme0n1: ios=7078/7169, merge=0/0, ticks=42381/40838, in_queue=83219, util=95.79% 00:12:32.971 nvme0n2: ios=2575/2695, merge=0/0, ticks=24360/21395, in_queue=45755, util=99.59% 00:12:32.971 nvme0n3: ios=5632/5919, merge=0/0, ticks=51800/47746, in_queue=99546, util=88.37% 00:12:32.971 nvme0n4: ios=3663/4096, merge=0/0, ticks=36888/41383, in_queue=78271, util=89.52% 00:12:32.971 17:39:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:32.971 [global] 00:12:32.971 thread=1 00:12:32.971 invalidate=1 00:12:32.971 rw=randwrite 00:12:32.971 time_based=1 
00:12:32.971 runtime=1 00:12:32.971 ioengine=libaio 00:12:32.971 direct=1 00:12:32.971 bs=4096 00:12:32.971 iodepth=128 00:12:32.971 norandommap=0 00:12:32.971 numjobs=1 00:12:32.971 00:12:32.971 verify_dump=1 00:12:32.971 verify_backlog=512 00:12:32.971 verify_state_save=0 00:12:32.971 do_verify=1 00:12:32.971 verify=crc32c-intel 00:12:32.971 [job0] 00:12:32.971 filename=/dev/nvme0n1 00:12:32.971 [job1] 00:12:32.971 filename=/dev/nvme0n2 00:12:32.971 [job2] 00:12:32.971 filename=/dev/nvme0n3 00:12:32.971 [job3] 00:12:32.971 filename=/dev/nvme0n4 00:12:32.971 Could not set queue depth (nvme0n1) 00:12:32.971 Could not set queue depth (nvme0n2) 00:12:32.971 Could not set queue depth (nvme0n3) 00:12:32.971 Could not set queue depth (nvme0n4) 00:12:33.238 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.238 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.238 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.238 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:33.238 fio-3.35 00:12:33.238 Starting 4 threads 00:12:34.649 00:12:34.649 job0: (groupid=0, jobs=1): err= 0: pid=2522454: Wed Nov 20 17:39:34 2024 00:12:34.650 read: IOPS=8099, BW=31.6MiB/s (33.2MB/s)(31.8MiB/1006msec) 00:12:34.650 slat (nsec): min=957, max=7149.0k, avg=69936.45, stdev=538477.34 00:12:34.650 clat (usec): min=2370, max=15374, avg=8537.87, stdev=2048.77 00:12:34.650 lat (usec): min=2375, max=15389, avg=8607.81, stdev=2086.83 00:12:34.650 clat percentiles (usec): 00:12:34.650 | 1.00th=[ 3589], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7308], 00:12:34.650 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8225], 00:12:34.650 | 70.00th=[ 8586], 80.00th=[10159], 90.00th=[11731], 95.00th=[13042], 00:12:34.650 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14877], 99.95th=[15008], 00:12:34.650 | 99.99th=[15401] 00:12:34.650 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:12:34.650 slat (nsec): min=1547, max=1840.7k, avg=48774.58, stdev=147290.81 00:12:34.650 clat (usec): min=1070, max=15029, avg=7080.43, stdev=1607.89 00:12:34.650 lat (usec): min=1080, max=15030, avg=7129.20, stdev=1617.92 00:12:34.650 clat percentiles (usec): 00:12:34.650 | 1.00th=[ 2343], 5.00th=[ 3458], 10.00th=[ 4490], 20.00th=[ 6194], 00:12:34.650 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7701], 60.00th=[ 7898], 00:12:34.650 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8291], 95.00th=[ 8455], 00:12:34.650 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[14746], 99.95th=[14877], 00:12:34.650 | 99.99th=[15008] 00:12:34.650 bw ( KiB/s): min=32768, max=32768, per=29.67%, avg=32768.00, stdev= 0.00, samples=2 00:12:34.650 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:12:34.650 lat (msec) : 2=0.24%, 4=4.51%, 10=84.93%, 20=10.32% 00:12:34.650 cpu : usr=4.28%, sys=6.37%, ctx=1126, majf=0, minf=2 00:12:34.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.650 issued rwts: total=8148,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.650 job1: (groupid=0, 
jobs=1): err= 0: pid=2522458: Wed Nov 20 17:39:34 2024 00:12:34.650 read: IOPS=6483, BW=25.3MiB/s (26.6MB/s)(26.5MiB/1046msec) 00:12:34.650 slat (nsec): min=898, max=10255k, avg=69544.81, stdev=470273.22 00:12:34.650 clat (usec): min=4745, max=53275, avg=9645.27, stdev=5665.20 00:12:34.650 lat (usec): min=5191, max=53277, avg=9714.82, stdev=5683.71 00:12:34.650 clat percentiles (usec): 00:12:34.650 | 1.00th=[ 5866], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 8160], 00:12:34.650 | 30.00th=[ 8225], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:12:34.650 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[11076], 95.00th=[13435], 00:12:34.650 | 99.00th=[46400], 99.50th=[51119], 99.90th=[52691], 99.95th=[53216], 00:12:34.650 | 99.99th=[53216] 00:12:34.650 write: IOPS=6852, BW=26.8MiB/s (28.1MB/s)(28.0MiB/1046msec); 0 zone resets 00:12:34.650 slat (nsec): min=1505, max=15374k, avg=68985.07, stdev=476750.28 00:12:34.650 clat (usec): min=3980, max=53278, avg=9359.26, stdev=3741.17 00:12:34.650 lat (usec): min=4071, max=53286, avg=9428.25, stdev=3767.19 00:12:34.650 clat percentiles (usec): 00:12:34.650 | 1.00th=[ 4752], 5.00th=[ 6521], 10.00th=[ 7570], 20.00th=[ 7898], 00:12:34.650 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:12:34.650 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[12387], 95.00th=[17695], 00:12:34.650 | 99.00th=[26608], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:12:34.650 | 99.99th=[53216] 00:12:34.650 bw ( KiB/s): min=28656, max=28672, per=25.95%, avg=28664.00, stdev=11.31, samples=2 00:12:34.650 iops : min= 7164, max= 7168, avg=7166.00, stdev= 2.83, samples=2 00:12:34.650 lat (msec) : 4=0.01%, 10=83.30%, 20=14.05%, 50=2.25%, 100=0.39% 00:12:34.650 cpu : usr=4.59%, sys=6.70%, ctx=580, majf=0, minf=1 00:12:34.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.650 issued rwts: total=6782,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.650 job2: (groupid=0, jobs=1): err= 0: pid=2522473: Wed Nov 20 17:39:34 2024 00:12:34.650 read: IOPS=6823, BW=26.7MiB/s (27.9MB/s)(27.9MiB/1047msec) 00:12:34.650 slat (nsec): min=1055, max=8384.5k, avg=76877.63, stdev=582721.65 00:12:34.650 clat (usec): min=3667, max=50578, avg=10386.26, stdev=5775.26 00:12:34.650 lat (usec): min=3676, max=58963, avg=10463.14, stdev=5799.31 00:12:34.650 clat percentiles (usec): 00:12:34.650 | 1.00th=[ 4228], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 8160], 00:12:34.650 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9372], 00:12:34.650 | 70.00th=[10290], 80.00th=[11469], 90.00th=[13566], 95.00th=[15139], 00:12:34.650 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:12:34.650 | 99.99th=[50594] 00:12:34.650 write: IOPS=6846, BW=26.7MiB/s (28.0MB/s)(28.0MiB/1047msec); 0 zone resets 00:12:34.650 slat (nsec): min=1660, max=8516.2k, avg=55311.78, stdev=350923.43 00:12:34.650 clat (usec): min=805, max=24800, avg=8177.91, stdev=2254.25 00:12:34.650 lat (usec): min=830, max=24809, avg=8233.22, stdev=2271.17 00:12:34.650 clat percentiles (usec): 00:12:34.650 | 1.00th=[ 2900], 5.00th=[ 4359], 10.00th=[ 5276], 20.00th=[ 6194], 00:12:34.650 | 30.00th=[ 7570], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 8979], 00:12:34.650 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9634], 
95.00th=[11338], 00:12:34.650 | 99.00th=[14484], 99.50th=[18482], 99.90th=[23462], 99.95th=[23462], 00:12:34.650 | 99.99th=[24773] 00:12:34.650 bw ( KiB/s): min=28672, max=28672, per=25.96%, avg=28672.00, stdev= 0.00, samples=2 00:12:34.650 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:12:34.650 lat (usec) : 1000=0.08% 00:12:34.650 lat (msec) : 2=0.12%, 4=1.75%, 10=78.67%, 20=18.38%, 50=0.63% 00:12:34.650 lat (msec) : 100=0.38% 00:12:34.650 cpu : usr=4.40%, sys=7.74%, ctx=670, majf=0, minf=1 00:12:34.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.650 issued rwts: total=7144,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.650 job3: (groupid=0, jobs=1): err= 0: pid=2522480: Wed Nov 20 17:39:34 2024 00:12:34.650 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:12:34.650 slat (nsec): min=938, max=10589k, avg=82726.99, stdev=623370.07 00:12:34.650 clat (usec): min=3025, max=25463, avg=10611.30, stdev=2697.01 00:12:34.650 lat (usec): min=3031, max=25465, avg=10694.02, stdev=2737.62 00:12:34.650 clat percentiles (usec): 00:12:34.650 | 1.00th=[ 4555], 5.00th=[ 7504], 10.00th=[ 8356], 20.00th=[ 8848], 00:12:34.650 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:12:34.650 | 70.00th=[11076], 80.00th=[12649], 90.00th=[14353], 95.00th=[15795], 00:12:34.650 | 99.00th=[19006], 99.50th=[21365], 99.90th=[24511], 99.95th=[25560], 00:12:34.650 | 99.99th=[25560] 00:12:34.650 write: IOPS=6340, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1006msec); 0 zone resets 00:12:34.650 slat (nsec): min=1574, max=8454.9k, avg=72354.94, stdev=450316.95 00:12:34.650 clat (usec): min=1126, max=25461, avg=9787.80, stdev=3513.75 00:12:34.650 lat (usec): min=1137, max=25463, avg=9860.16, stdev=3546.48 00:12:34.650 clat percentiles (usec): 00:12:34.650 | 1.00th=[ 3294], 5.00th=[ 5342], 10.00th=[ 6325], 20.00th=[ 7701], 00:12:34.650 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:12:34.650 | 70.00th=[10028], 80.00th=[10552], 90.00th=[14222], 95.00th=[17695], 00:12:34.650 | 99.00th=[21890], 99.50th=[23462], 99.90th=[24511], 99.95th=[24511], 00:12:34.650 | 99.99th=[25560] 00:12:34.650 bw ( KiB/s): min=23272, max=26736, per=22.64%, avg=25004.00, stdev=2449.42, samples=2 00:12:34.650 iops : min= 5818, max= 6684, avg=6251.00, stdev=612.35, samples=2 00:12:34.650 lat (msec) : 2=0.02%, 4=1.23%, 10=60.32%, 20=36.19%, 50=2.24% 00:12:34.650 cpu : usr=4.38%, sys=5.87%, ctx=589, majf=0, minf=1 00:12:34.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:34.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.650 issued rwts: total=6144,6379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.650 00:12:34.650 Run status group 0 (all jobs): 00:12:34.650 READ: bw=105MiB/s (110MB/s), 23.9MiB/s-31.6MiB/s (25.0MB/s-33.2MB/s), io=110MiB (116MB), run=1006-1047msec 00:12:34.650 WRITE: bw=108MiB/s (113MB/s), 24.8MiB/s-31.8MiB/s (26.0MB/s-33.4MB/s), io=113MiB (118MB), run=1006-1047msec 00:12:34.650 00:12:34.650 Disk stats (read/write): 00:12:34.650 nvme0n1: ios=6706/6983, merge=0/0, 
ticks=53937/47557, in_queue=101494, util=86.97% 00:12:34.650 nvme0n2: ios=5655/5646, merge=0/0, ticks=24440/25782, in_queue=50222, util=86.18% 00:12:34.650 nvme0n3: ios=5796/6144, merge=0/0, ticks=52808/48201, in_queue=101009, util=98.41% 00:12:34.650 nvme0n4: ios=4859/5120, merge=0/0, ticks=50939/50462, in_queue=101401, util=89.38% 00:12:34.650 17:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:34.650 17:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2522769 00:12:34.650 17:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:34.650 17:39:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:34.650 [global] 00:12:34.650 thread=1 00:12:34.650 invalidate=1 00:12:34.650 rw=read 00:12:34.650 time_based=1 00:12:34.650 runtime=10 00:12:34.650 ioengine=libaio 00:12:34.650 direct=1 00:12:34.650 bs=4096 00:12:34.650 iodepth=1 00:12:34.650 norandommap=1 00:12:34.650 numjobs=1 00:12:34.650 00:12:34.650 [job0] 00:12:34.650 filename=/dev/nvme0n1 00:12:34.650 [job1] 00:12:34.650 filename=/dev/nvme0n2 00:12:34.650 [job2] 00:12:34.650 filename=/dev/nvme0n3 00:12:34.650 [job3] 00:12:34.650 filename=/dev/nvme0n4 00:12:34.650 Could not set queue depth (nvme0n1) 00:12:34.650 Could not set queue depth (nvme0n2) 00:12:34.650 Could not set queue depth (nvme0n3) 00:12:34.651 Could not set queue depth (nvme0n4) 00:12:34.917 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.917 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.917 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.917 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.917 fio-3.35 00:12:34.917 Starting 4 threads 00:12:37.462 17:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:37.723 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4706304, buflen=4096 00:12:37.723 fio: pid=2523007, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:37.723 17:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:37.723 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=278528, buflen=4096 00:12:37.723 fio: pid=2523000, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:37.723 17:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:37.723 17:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:37.984 17:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:37.984 17:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:37.984 fio: io_u error on 
file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096 00:12:37.984 fio: pid=2522981, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:38.245 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=307200, buflen=4096 00:12:38.245 fio: pid=2522985, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:38.245 17:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.245 17:39:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:38.245 00:12:38.245 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2522981: Wed Nov 20 17:39:38 2024 00:12:38.245 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(284KiB/2956msec) 00:12:38.245 slat (usec): min=26, max=10669, avg=174.75, stdev=1254.26 00:12:38.245 clat (usec): min=994, max=42980, avg=41134.10, stdev=4852.24 00:12:38.245 lat (usec): min=1032, max=52100, avg=41310.94, stdev=5021.59 00:12:38.245 clat percentiles (usec): 00:12:38.245 | 1.00th=[ 996], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:38.245 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:12:38.245 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:38.245 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:38.245 | 99.99th=[42730] 00:12:38.245 bw ( KiB/s): min= 96, max= 96, per=5.52%, avg=96.00, stdev= 0.00, samples=5 00:12:38.245 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:12:38.245 lat (usec) : 1000=1.39% 00:12:38.245 lat (msec) : 50=97.22% 00:12:38.245 cpu : usr=0.14%, sys=0.00%, ctx=73, majf=0, minf=1 00:12:38.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.245 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.245 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.245 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2522985: Wed Nov 20 17:39:38 2024 00:12:38.245 read: IOPS=24, BW=95.7KiB/s (98.0kB/s)(300KiB/3134msec) 00:12:38.245 slat (usec): min=26, max=9640, avg=254.11, stdev=1393.94 00:12:38.245 clat (usec): min=1033, max=43039, avg=41224.07, stdev=4723.25 00:12:38.245 lat (usec): min=1073, max=48949, avg=41353.04, stdev=4804.80 00:12:38.245 clat percentiles (usec): 00:12:38.245 | 1.00th=[ 1037], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:38.245 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:38.245 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:38.245 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:12:38.245 | 99.99th=[43254] 00:12:38.245 bw ( KiB/s): min= 96, max= 96, per=5.52%, avg=96.00, stdev= 0.00, samples=6 00:12:38.245 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:12:38.245 lat (msec) : 2=1.32%, 50=97.37% 00:12:38.245 cpu : usr=0.00%, sys=0.16%, ctx=80, majf=0, minf=2 00:12:38.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:12:38.245 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.245 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.245 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2523000: Wed Nov 20 17:39:38 2024 00:12:38.245 read: IOPS=24, BW=98.2KiB/s (101kB/s)(272KiB/2770msec) 00:12:38.245 slat (nsec): min=27004, max=71320, avg=28639.13, stdev=6640.90 00:12:38.245 clat (usec): min=665, max=41572, avg=40380.73, stdev=4888.71 00:12:38.245 lat (usec): min=727, max=41599, avg=40409.39, stdev=4884.65 00:12:38.245 clat percentiles (usec): 00:12:38.245 | 1.00th=[ 668], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:38.245 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:38.245 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:38.245 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:12:38.245 | 99.99th=[41681] 00:12:38.245 bw ( KiB/s): min= 96, max= 104, per=5.69%, avg=99.20, stdev= 4.38, samples=5 00:12:38.245 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:12:38.245 lat (usec) : 750=1.45% 00:12:38.245 lat (msec) : 50=97.10% 00:12:38.245 cpu : usr=0.00%, sys=0.14%, ctx=71, majf=0, minf=2 00:12:38.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.245 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.245 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.245 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2523007: Wed Nov 20 17:39:38 2024 00:12:38.245 read: IOPS=444, BW=1777KiB/s (1820kB/s)(4596KiB/2586msec) 00:12:38.245 slat (nsec): min=8686, max=63830, avg=27636.85, stdev=4164.21 00:12:38.245 clat (usec): min=760, max=42078, avg=2192.97, stdev=6590.78 00:12:38.245 lat (usec): min=788, max=42105, avg=2220.60, stdev=6590.84 00:12:38.245 clat percentiles (usec): 00:12:38.245 | 1.00th=[ 898], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1037], 00:12:38.245 | 30.00th=[ 1057], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:12:38.245 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:12:38.245 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:12:38.245 | 99.99th=[42206] 00:12:38.245 bw ( KiB/s): min= 592, max= 3656, per=100.00%, avg=1756.80, stdev=1330.53, samples=5 00:12:38.245 iops : min= 148, max= 914, avg=439.20, stdev=332.63, samples=5 00:12:38.245 lat (usec) : 1000=11.74% 00:12:38.245 lat (msec) : 2=85.30%, 4=0.09%, 50=2.78% 00:12:38.245 cpu : usr=0.50%, sys=1.43%, ctx=1153, majf=0, minf=2 00:12:38.245 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.245 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.245 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.245 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.245 00:12:38.245 Run status group 0 (all jobs): 00:12:38.245 READ: bw=1740KiB/s (1781kB/s), 95.7KiB/s-1777KiB/s (98.0kB/s-1820kB/s), io=5452KiB (5583kB), run=2586-3134msec 00:12:38.245 
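[Editor's note] The io_u "Operation not supported" errors in this run are the expected outcome rather than a harness failure: the script starts a 10-second background read job and then deletes the backing raid and malloc bdevs out from under it. Condensed from the rpc.py bdev_raid_delete/bdev_malloc_delete calls traced around this run and the fio_status handling that follows (a sketch; fio-wrapper and rpc.py paths shortened, variable names as in the trace):

    fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # background reads on all four namespaces
    fio_pid=$!
    sleep 3                                  # let I/O get in flight
    rpc.py bdev_raid_delete concat0          # hot-remove the namespaces' backing bdevs
    rpc.py bdev_raid_delete raid0
    for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        rpc.py bdev_malloc_delete "$b"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?         # fio exits non-zero once its files vanish
    [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

This matches the fio_status=4 and "nvmf hotplug test: fio failed as expected" lines further down.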
00:12:38.245 Disk stats (read/write): 00:12:38.245 nvme0n1: ios=68/0, merge=0/0, ticks=2796/0, in_queue=2796, util=94.36% 00:12:38.245 nvme0n2: ios=105/0, merge=0/0, ticks=3539/0, in_queue=3539, util=99.26% 00:12:38.245 nvme0n3: ios=64/0, merge=0/0, ticks=2584/0, in_queue=2584, util=95.99% 00:12:38.245 nvme0n4: ios=983/0, merge=0/0, ticks=3061/0, in_queue=3061, util=99.26% 00:12:38.245 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.245 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:38.506 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.506 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:38.766 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.766 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:39.026 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:39.026 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:39.026 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:39.026 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2522769 00:12:39.026 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:39.026 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:39.287 nvmf hotplug test: fio failed as expected 00:12:39.287 17:39:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.287 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:39.287 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:39.287 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:39.287 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:39.287 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:39.287 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:39.287 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:39.548 rmmod nvme_tcp 00:12:39.548 rmmod nvme_fabrics 00:12:39.548 rmmod nvme_keyring 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 2519105 ']' 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 2519105 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2519105 ']' 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2519105 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2519105 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2519105' 00:12:39.548 killing process with pid 2519105 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2519105 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2519105 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # 
nvmf_tcp_fini 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.548 17:39:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.101 00:12:42.101 real 0m29.546s 00:12:42.101 user 2m37.519s 00:12:42.101 sys 0m9.498s 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.101 ************************************ 00:12:42.101 END TEST nvmf_fio_target 00:12:42.101 ************************************ 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:42.101 ************************************ 00:12:42.101 START TEST nvmf_bdevio 00:12:42.101 ************************************ 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:42.101 * Looking for test storage... 
00:12:42.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:42.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.101 --rc genhtml_branch_coverage=1 00:12:42.101 --rc genhtml_function_coverage=1 00:12:42.101 --rc genhtml_legend=1 00:12:42.101 --rc geninfo_all_blocks=1 00:12:42.101 --rc geninfo_unexecuted_blocks=1 00:12:42.101 00:12:42.101 ' 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:42.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.101 --rc genhtml_branch_coverage=1 00:12:42.101 --rc genhtml_function_coverage=1 00:12:42.101 --rc genhtml_legend=1 00:12:42.101 --rc geninfo_all_blocks=1 00:12:42.101 --rc geninfo_unexecuted_blocks=1 00:12:42.101 00:12:42.101 ' 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:42.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.101 --rc genhtml_branch_coverage=1 00:12:42.101 --rc genhtml_function_coverage=1 00:12:42.101 --rc genhtml_legend=1 00:12:42.101 --rc geninfo_all_blocks=1 00:12:42.101 --rc geninfo_unexecuted_blocks=1 00:12:42.101 00:12:42.101 ' 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:42.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.101 --rc genhtml_branch_coverage=1 00:12:42.101 --rc genhtml_function_coverage=1 00:12:42.101 --rc genhtml_legend=1 00:12:42.101 --rc geninfo_all_blocks=1 00:12:42.101 --rc geninfo_unexecuted_blocks=1 00:12:42.101 00:12:42.101 ' 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.101 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.102 17:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.243 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:50.244 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:50.244 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:50.244 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:50.244 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.244 17:39:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:12:50.244 00:12:50.244 --- 10.0.0.2 ping statistics --- 00:12:50.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.244 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:12:50.244 00:12:50.244 --- 10.0.0.1 ping statistics --- 00:12:50.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.244 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=2528313 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 2528313 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2528313 ']' 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.244 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.245 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.245 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.245 17:39:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.245 [2024-11-20 17:39:49.463682] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:50.245 [2024-11-20 17:39:49.463750] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.245 [2024-11-20 17:39:49.552324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.245 [2024-11-20 17:39:49.599617] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.245 [2024-11-20 17:39:49.599666] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.245 [2024-11-20 17:39:49.599675] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.245 [2024-11-20 17:39:49.599682] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.245 [2024-11-20 17:39:49.599687] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.245 [2024-11-20 17:39:49.599840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:12:50.245 [2024-11-20 17:39:49.599992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:12:50.245 [2024-11-20 17:39:49.600150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.245 [2024-11-20 17:39:49.600151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.506 [2024-11-20 17:39:50.341210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.506 Malloc0 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.506 17:39:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:50.506 [2024-11-20 17:39:50.407015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:50.506 { 00:12:50.506 "params": { 00:12:50.506 "name": "Nvme$subsystem", 00:12:50.506 "trtype": "$TEST_TRANSPORT", 00:12:50.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.506 "adrfam": "ipv4", 00:12:50.506 "trsvcid": "$NVMF_PORT", 00:12:50.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.506 "hdgst": ${hdgst:-false}, 00:12:50.506 "ddgst": ${ddgst:-false} 00:12:50.506 }, 00:12:50.506 "method": "bdev_nvme_attach_controller" 00:12:50.506 } 00:12:50.506 EOF 00:12:50.506 )") 00:12:50.506 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:12:50.767 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:12:50.767 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:12:50.767 17:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:50.767 "params": { 00:12:50.767 "name": "Nvme1", 00:12:50.767 "trtype": "tcp", 00:12:50.767 "traddr": "10.0.0.2", 00:12:50.767 "adrfam": "ipv4", 00:12:50.767 "trsvcid": "4420", 00:12:50.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.767 "hdgst": false, 00:12:50.767 "ddgst": false 00:12:50.767 }, 00:12:50.767 "method": "bdev_nvme_attach_controller" 00:12:50.767 }' 00:12:50.767 [2024-11-20 17:39:50.463991] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:50.767 [2024-11-20 17:39:50.464056] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528406 ] 00:12:50.767 [2024-11-20 17:39:50.544685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.767 [2024-11-20 17:39:50.593984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.767 [2024-11-20 17:39:50.594144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.767 [2024-11-20 17:39:50.594144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.028 I/O targets: 00:12:51.028 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:51.028 00:12:51.028 00:12:51.028 CUnit - A unit testing framework for C - Version 2.1-3 00:12:51.028 http://cunit.sourceforge.net/ 00:12:51.028 00:12:51.028 00:12:51.028 Suite: bdevio tests on: Nvme1n1 00:12:51.289 Test: blockdev write read block ...passed 00:12:51.289 Test: blockdev write zeroes read block ...passed 00:12:51.289 Test: blockdev write zeroes read no split ...passed 00:12:51.289 Test: blockdev write zeroes read split ...passed 00:12:51.289 Test: blockdev write zeroes read split partial ...passed 00:12:51.289 Test: blockdev reset ...[2024-11-20 17:39:51.083857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:51.289 [2024-11-20 17:39:51.083967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482340 (9): Bad file descriptor 00:12:51.289 [2024-11-20 17:39:51.097516] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:51.289 passed 00:12:51.289 Test: blockdev write read 8 blocks ...passed 00:12:51.289 Test: blockdev write read size > 128k ...passed 00:12:51.289 Test: blockdev write read invalid size ...passed 00:12:51.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.289 Test: blockdev write read max offset ...passed 00:12:51.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.550 Test: blockdev writev readv 8 blocks ...passed 00:12:51.550 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.550 Test: blockdev writev readv block ...passed 00:12:51.550 Test: blockdev writev readv size > 128k ...passed 00:12:51.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.550 Test: blockdev comparev and writev ...[2024-11-20 17:39:51.363398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.550 [2024-11-20 17:39:51.363454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.363472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.550 [2024-11-20 17:39:51.363481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.363967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.550 [2024-11-20 17:39:51.363981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.363996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.550 [2024-11-20 17:39:51.364005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.364379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.550 [2024-11-20 17:39:51.364391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.364406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.550 [2024-11-20 17:39:51.364415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.364806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.550 [2024-11-20 17:39:51.364818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.364838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.550 [2024-11-20 17:39:51.364847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:51.550 passed 00:12:51.550 Test: blockdev nvme passthru rw ...passed 00:12:51.550 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:39:51.449065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:51.550 [2024-11-20 17:39:51.449084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.449482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:51.550 [2024-11-20 17:39:51.449494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.449874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:51.550 [2024-11-20 17:39:51.449885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:51.550 [2024-11-20 17:39:51.450259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:51.550 [2024-11-20 17:39:51.450270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:51.550 passed 00:12:51.810 Test: blockdev nvme admin passthru ...passed 00:12:51.810 Test: blockdev copy ...passed 00:12:51.810 00:12:51.810 Run Summary: Type Total Ran Passed Failed Inactive 00:12:51.810 suites 1 1 n/a 0 0 00:12:51.810 tests 23 23 23 0 0 00:12:51.810 asserts 152 152 152 0 n/a 00:12:51.810 00:12:51.810 Elapsed time = 1.207 seconds 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.810 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.810 rmmod nvme_tcp 00:12:51.810 rmmod nvme_fabrics 00:12:51.810 rmmod nvme_keyring 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 2528313 ']' 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 2528313 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2528313 ']' 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2528313 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:51.811 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2528313 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2528313' 00:12:52.071 killing process with pid 2528313 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2528313 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2528313 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.071 17:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.617 17:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.617 00:12:54.617 real 0m12.369s 00:12:54.617 user 0m13.855s 00:12:54.617 sys 0m6.259s 00:12:54.617 17:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.617 17:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:54.617 ************************************ 00:12:54.617 END TEST nvmf_bdevio 00:12:54.617 ************************************ 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:54.617 00:12:54.617 real 5m6.324s 00:12:54.617 user 11m53.788s 00:12:54.617 sys 1m52.631s 
00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.617 ************************************ 00:12:54.617 END TEST nvmf_target_core 00:12:54.617 ************************************ 00:12:54.617 17:39:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:54.617 17:39:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.617 17:39:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.617 17:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.617 ************************************ 00:12:54.617 START TEST nvmf_target_extra 00:12:54.617 ************************************ 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:54.617 * Looking for test storage... 00:12:54.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:54.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.617 --rc genhtml_branch_coverage=1 00:12:54.617 --rc genhtml_function_coverage=1 00:12:54.617 --rc genhtml_legend=1 00:12:54.617 --rc geninfo_all_blocks=1 00:12:54.617 --rc geninfo_unexecuted_blocks=1 00:12:54.617 00:12:54.617 ' 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:54.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.617 --rc genhtml_branch_coverage=1 00:12:54.617 --rc genhtml_function_coverage=1 00:12:54.617 --rc genhtml_legend=1 00:12:54.617 --rc geninfo_all_blocks=1 00:12:54.617 --rc geninfo_unexecuted_blocks=1 00:12:54.617 00:12:54.617 ' 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:54.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.617 --rc genhtml_branch_coverage=1 00:12:54.617 --rc genhtml_function_coverage=1 00:12:54.617 --rc genhtml_legend=1 00:12:54.617 --rc geninfo_all_blocks=1 00:12:54.617 --rc geninfo_unexecuted_blocks=1 00:12:54.617 00:12:54.617 ' 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:54.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.617 --rc genhtml_branch_coverage=1 00:12:54.617 --rc genhtml_function_coverage=1 00:12:54.617 --rc genhtml_legend=1 00:12:54.617 --rc geninfo_all_blocks=1 00:12:54.617 --rc geninfo_unexecuted_blocks=1 00:12:54.617 00:12:54.617 ' 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.617 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.618 ************************************ 00:12:54.618 START TEST nvmf_example 00:12:54.618 ************************************ 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:54.618 * Looking for test storage... 
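The "[: : integer expression expected" message logged above is not a test failure: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', handing an empty string to -eq, which only accepts integers, so the [ builtin prints the diagnostic on stderr and returns nonzero. build_nvmf_app_args simply treats the failed test as "option not enabled" and the run continues. A two-line illustration with a hypothetical flag name (not the variable common.sh actually checks):

SOME_TEST_FLAG=""                                     # hypothetical flag, unset in this CI job
[ "$SOME_TEST_FLAG" -eq 1 ] && echo "enabled"         # prints the same diagnostic, echoes nothing
[ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo "enabled"    # defaulting to 0 keeps the test well-formed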
00:12:54.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:12:54.618 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.880 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:54.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.881 --rc genhtml_branch_coverage=1 00:12:54.881 --rc genhtml_function_coverage=1 00:12:54.881 --rc genhtml_legend=1 00:12:54.881 --rc geninfo_all_blocks=1 00:12:54.881 --rc geninfo_unexecuted_blocks=1 00:12:54.881 00:12:54.881 ' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:54.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.881 --rc genhtml_branch_coverage=1 00:12:54.881 --rc genhtml_function_coverage=1 00:12:54.881 --rc genhtml_legend=1 00:12:54.881 --rc geninfo_all_blocks=1 00:12:54.881 --rc geninfo_unexecuted_blocks=1 00:12:54.881 00:12:54.881 ' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:54.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.881 --rc genhtml_branch_coverage=1 00:12:54.881 --rc genhtml_function_coverage=1 00:12:54.881 --rc genhtml_legend=1 00:12:54.881 --rc geninfo_all_blocks=1 00:12:54.881 --rc geninfo_unexecuted_blocks=1 00:12:54.881 00:12:54.881 ' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:54.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.881 --rc genhtml_branch_coverage=1 00:12:54.881 --rc genhtml_function_coverage=1 00:12:54.881 --rc genhtml_legend=1 00:12:54.881 --rc geninfo_all_blocks=1 00:12:54.881 --rc geninfo_unexecuted_blocks=1 00:12:54.881 00:12:54.881 ' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:54.881 17:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:54.881 17:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.881 17:39:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:03.028 17:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:03.028 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:03.028 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.028 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:03.029 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:03.029 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 
-- # is_hw=yes 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.029 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:03.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms
00:13:03.029
00:13:03.029 --- 10.0.0.2 ping statistics ---
00:13:03.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:03.029 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms
00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:03.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:03.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms
00:13:03.029
00:13:03.029 --- 10.0.0.1 ping statistics ---
00:13:03.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:03.029 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms
00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2533070 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2533070 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2533070 ']' 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.029 17:40:02
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.029 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.290 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:03.291 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:15.525 Initializing NVMe Controllers
00:13:15.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:15.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:15.525 Initialization complete. Launching workers.
00:13:15.525 ========================================================
00:13:15.525                                                                              Latency(us)
00:13:15.525 Device Information                                                 :     IOPS    MiB/s    Average        min        max
00:13:15.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18907.49    73.86    3384.43     623.17   15593.82
00:13:15.525 ========================================================
00:13:15.525 Total                                                              : 18907.49    73.86    3384.43     623.17   15593.82
00:13:15.525
00:13:15.525 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:15.525 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:15.525 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:15.525 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:15.525 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.525 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:15.525 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.525 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.525 rmmod nvme_tcp 00:13:15.525 rmmod nvme_fabrics 00:13:15.526 rmmod nvme_keyring 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 2533070 ']' 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 2533070 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2533070 ']' 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2533070 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2533070 00:13:15.526 17:40:13
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2533070' 00:13:15.526 killing process with pid 2533070 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2533070 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2533070 00:13:15.526 nvmf threads initialize successfully 00:13:15.526 bdev subsystem init successfully 00:13:15.526 created a nvmf target service 00:13:15.526 create targets's poll groups done 00:13:15.526 all subsystems of target started 00:13:15.526 nvmf target is running 00:13:15.526 all subsystems of target stopped 00:13:15.526 destroy targets's poll groups done 00:13:15.526 destroyed the nvmf target service 00:13:15.526 bdev subsystem finish successfully 00:13:15.526 nvmf threads destroy successfully 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.526 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.786 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.786 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:15.786 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.786 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:16.047 00:13:16.047 real 0m21.325s 00:13:16.047 user 0m46.389s 00:13:16.047 sys 0m6.987s 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:16.047 ************************************ 00:13:16.047 END TEST nvmf_example 00:13:16.047 ************************************ 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.047 ************************************ 00:13:16.047 START TEST nvmf_filesystem 00:13:16.047 ************************************ 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:16.047 * Looking for test storage... 00:13:16.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:13:16.047 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.312 --rc genhtml_branch_coverage=1 00:13:16.312 --rc genhtml_function_coverage=1 00:13:16.312 --rc genhtml_legend=1 00:13:16.312 --rc geninfo_all_blocks=1 00:13:16.312 --rc geninfo_unexecuted_blocks=1 00:13:16.312 00:13:16.312 ' 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.312 --rc genhtml_branch_coverage=1 00:13:16.312 --rc genhtml_function_coverage=1 00:13:16.312 --rc genhtml_legend=1 00:13:16.312 --rc geninfo_all_blocks=1 00:13:16.312 --rc geninfo_unexecuted_blocks=1 00:13:16.312 00:13:16.312 ' 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.312 --rc genhtml_branch_coverage=1 00:13:16.312 --rc genhtml_function_coverage=1 00:13:16.312 --rc genhtml_legend=1 00:13:16.312 --rc geninfo_all_blocks=1 00:13:16.312 --rc geninfo_unexecuted_blocks=1 00:13:16.312 00:13:16.312 ' 00:13:16.312 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:16.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.312 --rc genhtml_branch_coverage=1 00:13:16.312 --rc genhtml_function_coverage=1 00:13:16.312 --rc genhtml_legend=1 00:13:16.312 --rc geninfo_all_blocks=1 00:13:16.312 --rc geninfo_unexecuted_blocks=1 00:13:16.312 00:13:16.312 ' 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:16.313 17:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:16.313 17:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:13:16.313 17:40:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:13:16.313 17:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:16.313 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:16.314 #define SPDK_CONFIG_H 00:13:16.314 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:16.314 #define SPDK_CONFIG_APPS 1 00:13:16.314 #define SPDK_CONFIG_ARCH native 00:13:16.314 #undef SPDK_CONFIG_ASAN 00:13:16.314 #undef SPDK_CONFIG_AVAHI 00:13:16.314 #undef SPDK_CONFIG_CET 00:13:16.314 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:16.314 #define SPDK_CONFIG_COVERAGE 1 00:13:16.314 #define SPDK_CONFIG_CROSS_PREFIX 00:13:16.314 #undef SPDK_CONFIG_CRYPTO 00:13:16.314 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:16.314 #undef SPDK_CONFIG_CUSTOMOCF 00:13:16.314 #undef SPDK_CONFIG_DAOS 00:13:16.314 #define SPDK_CONFIG_DAOS_DIR 00:13:16.314 #define SPDK_CONFIG_DEBUG 1 00:13:16.314 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:16.314 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:16.314 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:13:16.314 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:13:16.314 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:16.314 #undef SPDK_CONFIG_DPDK_UADK 00:13:16.314 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:16.314 #define SPDK_CONFIG_EXAMPLES 1 00:13:16.314 #undef SPDK_CONFIG_FC 00:13:16.314 #define SPDK_CONFIG_FC_PATH 00:13:16.314 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:16.314 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:16.314 #define SPDK_CONFIG_FSDEV 1 00:13:16.314 #undef SPDK_CONFIG_FUSE 00:13:16.314 #undef SPDK_CONFIG_FUZZER 00:13:16.314 #define SPDK_CONFIG_FUZZER_LIB 00:13:16.314 #undef SPDK_CONFIG_GOLANG 00:13:16.314 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:16.314 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:16.314 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:16.314 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:16.314 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:16.314 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:16.314 #undef SPDK_CONFIG_HAVE_LZ4 00:13:16.314 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:16.314 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:16.314 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:16.314 #define SPDK_CONFIG_IDXD 1 00:13:16.314 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:16.314 #undef SPDK_CONFIG_IPSEC_MB 00:13:16.314 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:16.314 #define SPDK_CONFIG_ISAL 1 00:13:16.314 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:16.314 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:16.314 #define SPDK_CONFIG_LIBDIR 00:13:16.314 #undef SPDK_CONFIG_LTO 00:13:16.314 #define SPDK_CONFIG_MAX_LCORES 128 00:13:16.314 #define SPDK_CONFIG_NVME_CUSE 1 00:13:16.314 #undef SPDK_CONFIG_OCF 00:13:16.314 #define SPDK_CONFIG_OCF_PATH 00:13:16.314 #define SPDK_CONFIG_OPENSSL_PATH 00:13:16.314 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:16.314 #define SPDK_CONFIG_PGO_DIR 00:13:16.314 #undef SPDK_CONFIG_PGO_USE 00:13:16.314 #define SPDK_CONFIG_PREFIX /usr/local 00:13:16.314 #undef SPDK_CONFIG_RAID5F 00:13:16.314 #undef SPDK_CONFIG_RBD 00:13:16.314 #define SPDK_CONFIG_RDMA 1 00:13:16.314 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:16.314 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:16.314 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:16.314 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:16.314 #define SPDK_CONFIG_SHARED 1 00:13:16.314 #undef SPDK_CONFIG_SMA 00:13:16.314 
#define SPDK_CONFIG_TESTS 1 00:13:16.314 #undef SPDK_CONFIG_TSAN 00:13:16.314 #define SPDK_CONFIG_UBLK 1 00:13:16.314 #define SPDK_CONFIG_UBSAN 1 00:13:16.314 #undef SPDK_CONFIG_UNIT_TESTS 00:13:16.314 #undef SPDK_CONFIG_URING 00:13:16.314 #define SPDK_CONFIG_URING_PATH 00:13:16.314 #undef SPDK_CONFIG_URING_ZNS 00:13:16.314 #undef SPDK_CONFIG_USDT 00:13:16.314 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:16.314 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:16.314 #define SPDK_CONFIG_VFIO_USER 1 00:13:16.314 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:16.314 #define SPDK_CONFIG_VHOST 1 00:13:16.314 #define SPDK_CONFIG_VIRTIO 1 00:13:16.314 #undef SPDK_CONFIG_VTUNE 00:13:16.314 #define SPDK_CONFIG_VTUNE_DIR 00:13:16.314 #define SPDK_CONFIG_WERROR 1 00:13:16.314 #define SPDK_CONFIG_WPDK_DIR 00:13:16.314 #undef SPDK_CONFIG_XNVME 00:13:16.314 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
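applications.sh, whose trace just completed, does two things: define per-application command arrays rooted at the checkout, and glob-match include/spdk/config.h for the DEBUG define (the heavily escaped pattern above is just xtrace quoting of that match). A condensed standalone sketch, with the root taken from this workspace's paths:

    # Sketch of the applications.sh pattern traced above.
    _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    _app_dir=$_root/build/bin
    NVMF_APP=("$_app_dir/nvmf_tgt")   # array keeps command and default args together
    SPDK_APP=("$_app_dir/spdk_tgt")
    if [[ $(<"$_root/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build: SPDK_AUTOTEST_DEBUG_APPS can take effect"
    fi

Callers then launch the target as "${NVMF_APP[@]}" plus whatever arguments the test needs.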
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:16.314 17:40:16 
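Note how the PATH exported and echoed above carries the same /opt/go, /opt/golangci, and /opt/protoc prefixes many times over: paths/export.sh prepends unconditionally, and it has evidently been re-sourced several times in this job. A guard like the following (purely a sketch; no such guard appears in the traced script) would make the prepend idempotent:

    # Sketch: idempotent PATH prepend; not present in the traced script.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;           # already present, leave PATH alone
            *) PATH=$1:$PATH ;;    # prepend exactly once
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH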
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:16.314 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
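The pm/common block above selects resource monitors with an associative array of sudo requirements plus a few environment probes (OS, the dotted placeholder compared against QEMU, /.dockerenv). In isolation the selection looks roughly like this sketch (the QEMU vendor probe is elided because the trace redacts its source):

    # Sketch of the monitor selection traced above (pm/common block).
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1 [collect-cpu-load]=0
        [collect-cpu-temp]=0 [collect-vmstat]=0
    )
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    # On bare-metal Linux outside a container, add the hardware probes too:
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi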
00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:16.315 17:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:16.315 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:16.316 
17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
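Every `: 0` / `export SPDK_TEST_*` pair in the long run above is consistent with bash's default-assignment idiom as it appears under xtrace: `: "${VAR:=0}"` echoes as `: 0` once the expansion resolves. A sketch of the pattern using one flag from this run:

    # Sketch: the ": <value>" lines above are ${VAR:=default} expansions.
    : "${SPDK_TEST_NVMF:=0}"   # keep a caller-provided value, else default to 0
    export SPDK_TEST_NVMF      # make it visible to child test scripts
    (( SPDK_TEST_NVMF )) && echo "NVMe-oF tests enabled"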
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:16.316 17:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
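The leak-suppression setup traced above is self-contained enough to restate: write the known libfuse3 leak into a suppression file and point LSAN at it before any instrumented binary runs (condensed here into a single redirect; the traced script builds the file with cat and echo):

    # Sketch of the LSAN suppression setup traced above.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"        # known fuse3 leak, ignored on purpose
    export LSAN_OPTIONS=suppressions=$supp   # read by ASan/LSan at runtime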
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:16.316 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2535863 ]] 00:13:16.317 17:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2535863 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.T0Utlk 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.T0Utlk/tests/target /tmp/spdk.T0Utlk 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=117052502016 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356509184 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12304007168 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666886144 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678252544 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871302656 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23367680 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 
-- # read -r source fs size use avail _ mount 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677634048 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:13:16.317 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=622592 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935634944 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935647232 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:16.318 * Looking for test storage... 
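set_test_storage, whose df -T pass ends just above, reduces to: derive a fallback directory name with a dry-run mktemp, then parse `df -T` into per-mount associative arrays so each candidate directory can be checked against the requested ~2 GiB. A condensed sketch (variable names follow the trace; error handling omitted):

    # Sketch of the storage scan traced above (condensed).
    storage_fallback=$(mktemp -udt spdk.XXXXXX)   # -u: print a name, create nothing
    declare -A fss sizes avails
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)
    echo "/ is ${fss[/]} with ${avails[/]} KB available"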
00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=117052502016 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=14518599680 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:13:16.318 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:16.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.580 --rc genhtml_branch_coverage=1 00:13:16.580 --rc genhtml_function_coverage=1 00:13:16.580 --rc genhtml_legend=1 00:13:16.580 --rc geninfo_all_blocks=1 00:13:16.580 --rc geninfo_unexecuted_blocks=1 00:13:16.580 00:13:16.580 ' 00:13:16.580 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:16.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.580 --rc genhtml_branch_coverage=1 00:13:16.580 --rc genhtml_function_coverage=1 00:13:16.580 --rc genhtml_legend=1 00:13:16.580 --rc geninfo_all_blocks=1 00:13:16.580 --rc geninfo_unexecuted_blocks=1 00:13:16.580 00:13:16.581 ' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:16.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.581 --rc genhtml_branch_coverage=1 00:13:16.581 --rc genhtml_function_coverage=1 00:13:16.581 --rc genhtml_legend=1 00:13:16.581 --rc geninfo_all_blocks=1 00:13:16.581 --rc geninfo_unexecuted_blocks=1 00:13:16.581 00:13:16.581 ' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:16.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.581 --rc genhtml_branch_coverage=1 00:13:16.581 --rc genhtml_function_coverage=1 00:13:16.581 --rc genhtml_legend=1 00:13:16.581 --rc geninfo_all_blocks=1 00:13:16.581 --rc geninfo_unexecuted_blocks=1 00:13:16.581 00:13:16.581 ' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:16.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:16.581 17:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:16.581 17:40:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:24.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:24.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:24.730 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.731 17:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:24.731 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:24.731 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.731 
17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:13:24.731 00:13:24.731 --- 10.0.0.2 ping statistics --- 00:13:24.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.731 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:24.731 00:13:24.731 --- 10.0.0.1 ping statistics --- 00:13:24.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.731 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:24.731 ************************************ 00:13:24.731 START TEST nvmf_filesystem_no_in_capsule 00:13:24.731 ************************************ 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=2539786 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 2539786 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2539786 ']' 00:13:24.731 
17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.731 17:40:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.731 [2024-11-20 17:40:23.941233] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:24.731 [2024-11-20 17:40:23.941299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.731 [2024-11-20 17:40:24.028606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.731 [2024-11-20 17:40:24.076900] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.731 [2024-11-20 17:40:24.076953] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.731 [2024-11-20 17:40:24.076961] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.731 [2024-11-20 17:40:24.076969] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.731 [2024-11-20 17:40:24.076975] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
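[Editor's note] The `nvmfappstart`/`waitforlisten` exchange above reduces to: launch `nvmf_tgt` inside the target network namespace, then poll until the process is both alive and answering on its JSON-RPC Unix socket. A sketch of that pattern under stated assumptions -- the polling body is not shown in the trace, so probing with `rpc.py rpc_get_methods`, the retry count, and the socket path are assumptions here, though they match common SPDK defaults.

# Start the target in the namespace created earlier (flags mirror the
# trace: shm id 0, all tracepoint groups, 4-core mask).
rpc_addr=/var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait for the app to listen on its RPC socket, bailing out if it dies.
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break    # target is up and answering RPCs
    fi
    sleep 0.5
done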
00:13:24.731 [2024-11-20 17:40:24.077131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.732 [2024-11-20 17:40:24.077289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.732 [2024-11-20 17:40:24.077561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.732 [2024-11-20 17:40:24.077563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.995 [2024-11-20 17:40:24.820422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.995 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.256 Malloc1 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.256 17:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.256 [2024-11-20 17:40:24.971850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.256 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.257 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:25.257 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:25.257 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:25.257 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:25.257 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:25.257 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:25.257 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.257 17:40:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:25.257 { 00:13:25.257 "name": "Malloc1", 00:13:25.257 "aliases": [ 00:13:25.257 "5a3c2ce7-98de-40fc-b0b8-c5b8022e1256" 00:13:25.257 ], 00:13:25.257 "product_name": "Malloc disk", 00:13:25.257 "block_size": 512, 00:13:25.257 "num_blocks": 1048576, 00:13:25.257 "uuid": "5a3c2ce7-98de-40fc-b0b8-c5b8022e1256", 00:13:25.257 "assigned_rate_limits": { 00:13:25.257 "rw_ios_per_sec": 0, 00:13:25.257 "rw_mbytes_per_sec": 0, 00:13:25.257 "r_mbytes_per_sec": 0, 00:13:25.257 "w_mbytes_per_sec": 0 00:13:25.257 }, 00:13:25.257 "claimed": true, 00:13:25.257 "claim_type": "exclusive_write", 00:13:25.257 "zoned": false, 00:13:25.257 "supported_io_types": { 00:13:25.257 "read": 
true, 00:13:25.257 "write": true, 00:13:25.257 "unmap": true, 00:13:25.257 "flush": true, 00:13:25.257 "reset": true, 00:13:25.257 "nvme_admin": false, 00:13:25.257 "nvme_io": false, 00:13:25.257 "nvme_io_md": false, 00:13:25.257 "write_zeroes": true, 00:13:25.257 "zcopy": true, 00:13:25.257 "get_zone_info": false, 00:13:25.257 "zone_management": false, 00:13:25.257 "zone_append": false, 00:13:25.257 "compare": false, 00:13:25.257 "compare_and_write": false, 00:13:25.257 "abort": true, 00:13:25.257 "seek_hole": false, 00:13:25.257 "seek_data": false, 00:13:25.257 "copy": true, 00:13:25.257 "nvme_iov_md": false 00:13:25.257 }, 00:13:25.257 "memory_domains": [ 00:13:25.257 { 00:13:25.257 "dma_device_id": "system", 00:13:25.257 "dma_device_type": 1 00:13:25.257 }, 00:13:25.257 { 00:13:25.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.257 "dma_device_type": 2 00:13:25.257 } 00:13:25.257 ], 00:13:25.257 "driver_specific": {} 00:13:25.257 } 00:13:25.257 ]' 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:25.257 17:40:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.170 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.170 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.170 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.170 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:27.170 17:40:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:29.084 17:40:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:29.656 17:40:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.634 ************************************ 00:13:30.634 START TEST filesystem_ext4 00:13:30.634 ************************************ 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
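[Editor's note] Condensed from the trace, the body of each filesystem_* subtest is a plain smoke test against the connected NVMe-oF block device: make a filesystem, do one synced write/delete cycle, unmount, and confirm the target process survived. The wrapper function below is illustrative; the individual commands, the `-F`/`-f` force-flag split, and the `kill -0 $nvmfpid` liveness check all mirror the trace.

filesystem_smoke_test() {
    local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device

    mkdir -p "$mnt"
    if [[ $fstype == ext4 ]]; then
        mkfs.ext4 -F "$dev"          # ext4 spells "force" as -F ...
    else
        "mkfs.$fstype" -f "$dev"     # ... btrfs and xfs spell it -f
    fi

    mount "$dev" "$mnt"
    touch "$mnt/aaa"                 # one write ...
    sync
    rm "$mnt/aaa"                    # ... and one delete, both synced
    sync
    umount "$mnt"

    kill -0 "$nvmfpid"               # the nvmf target must still be alive
}

filesystem_smoke_test ext4           # the trace repeats this for btrfs and xfs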
00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:30.634 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:30.635 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:30.635 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:30.635 17:40:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:30.635 mke2fs 1.47.0 (5-Feb-2023) 00:13:30.635 Discarding device blocks: 0/522240 done 00:13:30.635 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:30.635 Filesystem UUID: 9e81477d-5bb4-4076-84eb-d7b3c98dd0f0 00:13:30.635 Superblock backups stored on blocks: 00:13:30.635 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:30.635 00:13:30.635 Allocating group tables: 0/64 done 00:13:30.635 Writing inode tables: 0/64 done 00:13:30.635 Creating journal (8192 blocks): done 00:13:32.940 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:13:32.940 00:13:32.940 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:32.940 17:40:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:38.245 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:38.245 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:38.245 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:38.245 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:38.245 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:38.245 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:38.245 
17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2539786 00:13:38.245 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:38.245 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:38.506 00:13:38.506 real 0m7.862s 00:13:38.506 user 0m0.022s 00:13:38.506 sys 0m0.084s 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:38.506 ************************************ 00:13:38.506 END TEST filesystem_ext4 00:13:38.506 ************************************ 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.506 ************************************ 00:13:38.506 START TEST filesystem_btrfs 00:13:38.506 ************************************ 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:38.506 17:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:38.506 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:38.768 btrfs-progs v6.8.1 00:13:38.768 See https://btrfs.readthedocs.io for more information. 00:13:38.768 00:13:38.768 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:38.768 NOTE: several default settings have changed in version 5.15, please make sure 00:13:38.768 this does not affect your deployments: 00:13:38.768 - DUP for metadata (-m dup) 00:13:38.768 - enabled no-holes (-O no-holes) 00:13:38.768 - enabled free-space-tree (-R free-space-tree) 00:13:38.768 00:13:38.768 Label: (null) 00:13:38.768 UUID: fe5d6582-e6e0-49e2-ada7-7fdaa5111235 00:13:38.768 Node size: 16384 00:13:38.768 Sector size: 4096 (CPU page size: 4096) 00:13:38.768 Filesystem size: 510.00MiB 00:13:38.768 Block group profiles: 00:13:38.768 Data: single 8.00MiB 00:13:38.768 Metadata: DUP 32.00MiB 00:13:38.768 System: DUP 8.00MiB 00:13:38.768 SSD detected: yes 00:13:38.768 Zoned device: no 00:13:38.768 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:38.768 Checksum: crc32c 00:13:38.768 Number of devices: 1 00:13:38.768 Devices: 00:13:38.768 ID SIZE PATH 00:13:38.768 1 510.00MiB /dev/nvme0n1p1 00:13:38.768 00:13:38.768 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:38.768 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:39.029 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:39.029 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:39.290 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:39.290 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:39.290 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:39.290 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:39.290 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2539786 00:13:39.290 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:39.290 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:39.290 17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:39.290 
17:40:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:39.290 00:13:39.290 real 0m0.756s 00:13:39.290 user 0m0.029s 00:13:39.290 sys 0m0.116s 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:39.290 ************************************ 00:13:39.290 END TEST filesystem_btrfs 00:13:39.290 ************************************ 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:39.290 ************************************ 00:13:39.290 START TEST filesystem_xfs 00:13:39.290 ************************************ 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:39.290 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:39.290 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:39.290 = sectsz=512 attr=2, projid32bit=1 00:13:39.290 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:39.290 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:39.290 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:39.290 = sunit=0 swidth=0 blks 00:13:39.290 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:39.290 log =internal log bsize=4096 blocks=16384, version=2 00:13:39.290 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:39.290 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:40.240 Discarding blocks...Done. 00:13:40.240 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:40.240 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2539786 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:42.789 00:13:42.789 real 0m3.438s 00:13:42.789 user 0m0.030s 00:13:42.789 sys 0m0.075s 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:42.789 ************************************ 00:13:42.789 END TEST filesystem_xfs 00:13:42.789 ************************************ 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:42.789 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.360 17:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2539786 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2539786 ']' 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2539786 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2539786 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2539786' 00:13:43.360 killing process with pid 2539786 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2539786 00:13:43.360 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 2539786 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:43.621 00:13:43.621 real 0m19.550s 00:13:43.621 user 1m17.260s 00:13:43.621 sys 0m1.459s 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.621 ************************************ 00:13:43.621 END TEST nvmf_filesystem_no_in_capsule 00:13:43.621 ************************************ 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.621 ************************************ 00:13:43.621 START TEST nvmf_filesystem_in_capsule 00:13:43.621 ************************************ 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=2543750 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 2543750 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2543750 ']' 00:13:43.621 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.622 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.622 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
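[annotation] The in_capsule variant starting here differs from the no_in_capsule run above only in the -c 4096 handed to nvmf_create_transport, which lets the initiator carry up to 4096 bytes of write data inside the NVMe/TCP command capsule instead of waiting for an R2T exchange. Stripped of the rpc_cmd wrapper, the target bring-up traced below amounts to the following sequence (a condensed sketch; scripts/rpc.py stands in for the harness's rpc_cmd, and the flag glosses in the comments are a reading of the trace, not authoritative docs):

    # TCP transport with 8 KiB I/O units and 4 KiB in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MiB malloc bdev (1048576 blocks x 512 B, per the bdev_get_bdevs dump below)
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # subsystem with serial SPDKISFASTANDAWESOME; -a allows any host NQN
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420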
00:13:43.622 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.622 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.882 [2024-11-20 17:40:43.566224] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:43.882 [2024-11-20 17:40:43.566264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.882 [2024-11-20 17:40:43.640305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.882 [2024-11-20 17:40:43.669272] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.882 [2024-11-20 17:40:43.669306] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.883 [2024-11-20 17:40:43.669311] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.883 [2024-11-20 17:40:43.669316] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.883 [2024-11-20 17:40:43.669320] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.883 [2024-11-20 17:40:43.669465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.883 [2024-11-20 17:40:43.669616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.883 [2024-11-20 17:40:43.669771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.883 [2024-11-20 17:40:43.669773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.883 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.883 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:43.883 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:43.883 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:43.883 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.144 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.144 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:44.144 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:44.144 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.144 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.144 [2024-11-20 17:40:43.802228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.144 17:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 Malloc1 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 [2024-11-20 17:40:43.924873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:44.145 17:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:44.145 { 00:13:44.145 "name": "Malloc1", 00:13:44.145 "aliases": [ 00:13:44.145 "136f921f-675a-427c-9d9b-a5ee353e7031" 00:13:44.145 ], 00:13:44.145 "product_name": "Malloc disk", 00:13:44.145 "block_size": 512, 00:13:44.145 "num_blocks": 1048576, 00:13:44.145 "uuid": "136f921f-675a-427c-9d9b-a5ee353e7031", 00:13:44.145 "assigned_rate_limits": { 00:13:44.145 "rw_ios_per_sec": 0, 00:13:44.145 "rw_mbytes_per_sec": 0, 00:13:44.145 "r_mbytes_per_sec": 0, 00:13:44.145 "w_mbytes_per_sec": 0 00:13:44.145 }, 00:13:44.145 "claimed": true, 00:13:44.145 "claim_type": "exclusive_write", 00:13:44.145 "zoned": false, 00:13:44.145 "supported_io_types": { 00:13:44.145 "read": true, 00:13:44.145 "write": true, 00:13:44.145 "unmap": true, 00:13:44.145 "flush": true, 00:13:44.145 "reset": true, 00:13:44.145 "nvme_admin": false, 00:13:44.145 "nvme_io": false, 00:13:44.145 "nvme_io_md": false, 00:13:44.145 "write_zeroes": true, 00:13:44.145 "zcopy": true, 00:13:44.145 "get_zone_info": false, 00:13:44.145 "zone_management": false, 00:13:44.145 "zone_append": false, 00:13:44.145 "compare": false, 00:13:44.145 "compare_and_write": false, 00:13:44.145 "abort": true, 00:13:44.145 "seek_hole": false, 00:13:44.145 "seek_data": false, 00:13:44.145 "copy": true, 00:13:44.145 "nvme_iov_md": false 00:13:44.145 }, 00:13:44.145 "memory_domains": [ 00:13:44.145 { 00:13:44.145 "dma_device_id": "system", 00:13:44.145 "dma_device_type": 1 00:13:44.145 }, 00:13:44.145 { 00:13:44.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.145 "dma_device_type": 2 00:13:44.145 } 00:13:44.145 ], 00:13:44.145 "driver_specific": {} 00:13:44.145 } 00:13:44.145 ]' 00:13:44.145 17:40:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:44.145 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:44.145 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:44.145 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:44.145 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:44.145 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:44.145 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:44.145 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.061 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.061 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.061 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.061 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:46.061 17:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:47.973 17:40:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:48.233 17:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:48.495 17:40:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:49.438 ************************************ 00:13:49.438 START TEST filesystem_in_capsule_ext4 00:13:49.438 ************************************ 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:49.438 17:40:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:49.438 mke2fs 1.47.0 (5-Feb-2023) 00:13:49.438 Discarding device blocks: 0/522240 done 00:13:49.702 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:49.702 Filesystem UUID: 8a2de94f-0518-4a49-a5a7-a43310982a82 00:13:49.702 Superblock backups stored on blocks: 00:13:49.702 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:49.702 00:13:49.702 Allocating group tables: 0/64 done 00:13:49.702 Writing inode tables: 
0/64 done 00:13:52.246 Creating journal (8192 blocks): done 00:13:52.507 Writing superblocks and filesystem accounting information: 0/64 done 00:13:52.507 00:13:52.507 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:52.507 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2543750 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:59.091 00:13:59.091 real 0m8.564s 00:13:59.091 user 0m0.029s 00:13:59.091 sys 0m0.079s 00:13:59.091 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:59.092 ************************************ 00:13:59.092 END TEST filesystem_in_capsule_ext4 00:13:59.092 ************************************ 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.092 
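[annotation] The namespace these in_capsule tests format was attached near 17:40:45 with nvme connect, after which the waitforserial helper polled lsblk until the target's serial string appeared. A condensed sketch of that flow (the --hostnqn/--hostid flags visible in the trace are omitted here, and the 15-iteration cap is taken from the traced loop counter):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        # exactly one block device carrying the serial means the controller is up
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    done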
************************************ 00:13:59.092 START TEST filesystem_in_capsule_btrfs 00:13:59.092 ************************************ 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:59.092 17:40:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:59.092 btrfs-progs v6.8.1 00:13:59.092 See https://btrfs.readthedocs.io for more information. 00:13:59.092 00:13:59.092 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:59.092 NOTE: several default settings have changed in version 5.15, please make sure 00:13:59.092 this does not affect your deployments: 00:13:59.092 - DUP for metadata (-m dup) 00:13:59.092 - enabled no-holes (-O no-holes) 00:13:59.092 - enabled free-space-tree (-R free-space-tree) 00:13:59.092 00:13:59.092 Label: (null) 00:13:59.092 UUID: 7581d3d9-e0fc-4f6f-b006-45e37616b2ce 00:13:59.092 Node size: 16384 00:13:59.092 Sector size: 4096 (CPU page size: 4096) 00:13:59.092 Filesystem size: 510.00MiB 00:13:59.092 Block group profiles: 00:13:59.092 Data: single 8.00MiB 00:13:59.092 Metadata: DUP 32.00MiB 00:13:59.092 System: DUP 8.00MiB 00:13:59.092 SSD detected: yes 00:13:59.092 Zoned device: no 00:13:59.092 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:59.092 Checksum: crc32c 00:13:59.092 Number of devices: 1 00:13:59.092 Devices: 00:13:59.092 ID SIZE PATH 00:13:59.092 1 510.00MiB /dev/nvme0n1p1 00:13:59.092 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2543750 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:59.092 00:13:59.092 real 0m0.715s 00:13:59.092 user 0m0.021s 00:13:59.092 sys 0m0.128s 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:13:59.092 ************************************ 00:13:59.092 END TEST filesystem_in_capsule_btrfs 00:13:59.092 ************************************ 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:59.092 ************************************ 00:13:59.092 START TEST filesystem_in_capsule_xfs 00:13:59.092 ************************************ 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:59.092 17:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:59.092 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:59.092 = sectsz=512 attr=2, projid32bit=1 00:13:59.092 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:59.092 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:59.092 data = bsize=4096 blocks=130560, imaxpct=25 00:13:59.092 = sunit=0 swidth=0 blks 00:13:59.092 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:59.092 log =internal log bsize=4096 blocks=16384, version=2 00:13:59.092 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:59.092 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:00.033 Discarding blocks...Done. 
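[annotation] Every filesystem_* test in this section runs the same nvmf_filesystem_create body from target/filesystem.sh. With the xtrace noise stripped it reduces to roughly the following (a reconstruction from the traced script lines, filesystem.sh@18-43 and autotest_common.sh@926-945; the retry path implied by i=0 never fires in these runs and is elided):

    make_filesystem() {                       # common/autotest_common.sh helper
        local fstype=$1 dev_name=$2
        local i=0 force
        # ext4 forces with -F; btrfs and xfs force with -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs.$fstype $force "$dev_name" && return 0
    }

    make_filesystem "$fstype" /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device          # exercise the fs over NVMe/TCP
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # whole device still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible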
00:14:00.033 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:14:00.033 17:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2543750 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:02.578 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:02.578 00:14:02.578 real 0m3.623s 00:14:02.578 user 0m0.023s 00:14:02.578 sys 0m0.082s 00:14:02.579 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.579 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:02.579 ************************************ 00:14:02.579 END TEST filesystem_in_capsule_xfs 00:14:02.579 ************************************ 00:14:02.579 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:02.579 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:02.579 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2543750 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2543750 ']' 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2543750 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2543750 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2543750' 00:14:02.839 killing process with pid 2543750 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2543750 00:14:02.839 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2543750 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:03.100 00:14:03.100 real 0m19.401s 00:14:03.100 user 1m16.671s 00:14:03.100 sys 0m1.415s 00:14:03.100 17:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.100 ************************************ 00:14:03.100 END TEST nvmf_filesystem_in_capsule 00:14:03.100 ************************************ 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.100 17:41:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:03.100 rmmod nvme_tcp 00:14:03.100 rmmod nvme_fabrics 00:14:03.100 rmmod nvme_keyring 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.361 17:41:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.273 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:05.273 00:14:05.273 real 0m49.340s 00:14:05.273 user 2m36.346s 00:14:05.273 sys 0m8.792s 00:14:05.273 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.273 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:05.273 
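[annotation] The nvmftestfini teardown traced just above is, condensed, the mirror of the setup: unload the host-side NVMe modules, strip the SPDK firewall rules, and clear the test interface. Roughly (a sketch; the internals of remove_spdk_ns are not shown in this excerpt):

    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop test rules, keep the rest
    ip -4 addr flush cvl_0_1       # clear the test interface address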
************************************ 00:14:05.273 END TEST nvmf_filesystem 00:14:05.273 ************************************ 00:14:05.273 17:41:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:05.273 17:41:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:05.273 17:41:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.273 17:41:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.534 ************************************ 00:14:05.535 START TEST nvmf_target_discovery 00:14:05.535 ************************************ 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:05.535 * Looking for test storage... 00:14:05.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.535 --rc genhtml_branch_coverage=1 00:14:05.535 --rc genhtml_function_coverage=1 00:14:05.535 --rc genhtml_legend=1 00:14:05.535 --rc geninfo_all_blocks=1 00:14:05.535 --rc geninfo_unexecuted_blocks=1 00:14:05.535 00:14:05.535 ' 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.535 --rc genhtml_branch_coverage=1 00:14:05.535 --rc genhtml_function_coverage=1 00:14:05.535 --rc genhtml_legend=1 00:14:05.535 --rc geninfo_all_blocks=1 00:14:05.535 --rc geninfo_unexecuted_blocks=1 00:14:05.535 00:14:05.535 ' 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.535 --rc genhtml_branch_coverage=1 00:14:05.535 --rc genhtml_function_coverage=1 00:14:05.535 --rc genhtml_legend=1 00:14:05.535 --rc geninfo_all_blocks=1 00:14:05.535 --rc geninfo_unexecuted_blocks=1 00:14:05.535 00:14:05.535 ' 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:05.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.535 --rc genhtml_branch_coverage=1 00:14:05.535 --rc genhtml_function_coverage=1 00:14:05.535 --rc genhtml_legend=1 00:14:05.535 --rc geninfo_all_blocks=1 00:14:05.535 --rc geninfo_unexecuted_blocks=1 00:14:05.535 00:14:05.535 ' 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.535 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:05.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:05.536 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:05.798 17:41:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:13.940 17:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:13.940 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:13.941 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice 
== unbound ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:13.941 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:13.941 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:13.941 17:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:13.941 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:13.941 17:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:13.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:14:13.941 00:14:13.941 --- 10.0.0.2 ping statistics --- 00:14:13.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.941 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:14:13.941 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:14:13.942 00:14:13.942 --- 10.0.0.1 ping statistics --- 00:14:13.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.942 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=2551987 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 2551987 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2551987 ']' 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.942 17:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.942 [2024-11-20 17:41:13.020738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:13.942 [2024-11-20 17:41:13.020803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.942 [2024-11-20 17:41:13.108343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.942 [2024-11-20 17:41:13.156406] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.942 [2024-11-20 17:41:13.156459] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.942 [2024-11-20 17:41:13.156467] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.942 [2024-11-20 17:41:13.156474] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.942 [2024-11-20 17:41:13.156480] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
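The network plumbing traced in nvmf_tcp_init above can be reproduced by hand. A minimal sketch, run as root, assuming two ports of one NIC renamed cvl_0_0 and cvl_0_1 as in this run:

  # Put the target-side port in its own namespace so one host can act
  # as both initiator and target over real hardware.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # 10.0.0.1 is the initiator, 10.0.0.2 the target, matching the pings above.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagged so teardown can find the rule later
  # (the harness embeds the full rule text in the comment).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  # Sanity checks, then start the target inside the namespace; the log
  # uses the absolute Jenkins build path for nvmf_tgt, shortened here.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Everything initiator-side then reaches 10.0.0.2:4420 through cvl_0_1, while the target only ever sees cvl_0_0 inside cvl_0_0_ns_spdk.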
00:14:13.942 [2024-11-20 17:41:13.156641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.942 [2024-11-20 17:41:13.156803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.942 [2024-11-20 17:41:13.156959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.942 [2024-11-20 17:41:13.156961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.942 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.942 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:13.942 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 [2024-11-20 17:41:13.903467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 Null1 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 17:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 [2024-11-20 17:41:13.963900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 Null2 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:14.205 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.205 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.205 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.205 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:14.205 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:14:14.206 Null3 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.206 Null4 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.206 17:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.206 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.468 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.468 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:14.468 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.468 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.468 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.468 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:14.468 00:14:14.468 Discovery Log Number of Records 6, Generation counter 6 00:14:14.468 =====Discovery Log Entry 0====== 00:14:14.468 trtype: tcp 00:14:14.468 adrfam: ipv4 00:14:14.468 subtype: current discovery subsystem 00:14:14.468 treq: not required 00:14:14.468 portid: 0 00:14:14.468 trsvcid: 4420 00:14:14.468 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:14.468 traddr: 10.0.0.2 00:14:14.468 eflags: explicit discovery connections, duplicate discovery information 00:14:14.468 sectype: none 00:14:14.468 =====Discovery Log Entry 1====== 00:14:14.468 trtype: tcp 00:14:14.468 adrfam: ipv4 00:14:14.468 subtype: nvme subsystem 00:14:14.468 treq: not required 00:14:14.468 portid: 0 00:14:14.468 trsvcid: 4420 00:14:14.468 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:14.468 traddr: 10.0.0.2 00:14:14.468 eflags: none 00:14:14.468 sectype: none 00:14:14.468 =====Discovery Log Entry 2====== 00:14:14.468 trtype: tcp 00:14:14.468 adrfam: ipv4 00:14:14.468 subtype: nvme subsystem 00:14:14.468 treq: not required 00:14:14.468 portid: 0 00:14:14.468 trsvcid: 4420 00:14:14.468 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:14.468 traddr: 10.0.0.2 00:14:14.468 eflags: none 00:14:14.468 sectype: none 00:14:14.468 =====Discovery Log Entry 3====== 00:14:14.468 trtype: tcp 00:14:14.468 adrfam: ipv4 00:14:14.468 subtype: nvme subsystem 00:14:14.468 treq: not required 00:14:14.468 portid: 0 00:14:14.469 trsvcid: 4420 00:14:14.469 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:14.469 traddr: 10.0.0.2 00:14:14.469 eflags: none 00:14:14.469 sectype: none 00:14:14.469 =====Discovery Log Entry 4====== 00:14:14.469 trtype: tcp 00:14:14.469 adrfam: ipv4 00:14:14.469 subtype: nvme subsystem 
00:14:14.469 treq: not required 00:14:14.469 portid: 0 00:14:14.469 trsvcid: 4420 00:14:14.469 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:14.469 traddr: 10.0.0.2 00:14:14.469 eflags: none 00:14:14.469 sectype: none 00:14:14.469 =====Discovery Log Entry 5====== 00:14:14.469 trtype: tcp 00:14:14.469 adrfam: ipv4 00:14:14.469 subtype: discovery subsystem referral 00:14:14.469 treq: not required 00:14:14.469 portid: 0 00:14:14.469 trsvcid: 4430 00:14:14.469 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:14.469 traddr: 10.0.0.2 00:14:14.469 eflags: none 00:14:14.469 sectype: none 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:14.469 Perform nvmf subsystem discovery via RPC 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.469 [ 00:14:14.469 { 00:14:14.469 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:14.469 "subtype": "Discovery", 00:14:14.469 "listen_addresses": [ 00:14:14.469 { 00:14:14.469 "trtype": "TCP", 00:14:14.469 "adrfam": "IPv4", 00:14:14.469 "traddr": "10.0.0.2", 00:14:14.469 "trsvcid": "4420" 00:14:14.469 } 00:14:14.469 ], 00:14:14.469 "allow_any_host": true, 00:14:14.469 "hosts": [] 00:14:14.469 }, 00:14:14.469 { 00:14:14.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.469 "subtype": "NVMe", 00:14:14.469 "listen_addresses": [ 00:14:14.469 { 00:14:14.469 "trtype": "TCP", 00:14:14.469 "adrfam": "IPv4", 00:14:14.469 "traddr": "10.0.0.2", 00:14:14.469 "trsvcid": "4420" 00:14:14.469 } 00:14:14.469 ], 00:14:14.469 "allow_any_host": true, 00:14:14.469 "hosts": [], 00:14:14.469 "serial_number": "SPDK00000000000001", 00:14:14.469 "model_number": "SPDK bdev Controller", 00:14:14.469 "max_namespaces": 32, 00:14:14.469 "min_cntlid": 1, 00:14:14.469 "max_cntlid": 65519, 00:14:14.469 "namespaces": [ 00:14:14.469 { 00:14:14.469 "nsid": 1, 00:14:14.469 "bdev_name": "Null1", 00:14:14.469 "name": "Null1", 00:14:14.469 "nguid": "B8CC9E85EA7D43A78440E598DB3E9D16", 00:14:14.469 "uuid": "b8cc9e85-ea7d-43a7-8440-e598db3e9d16" 00:14:14.469 } 00:14:14.469 ] 00:14:14.469 }, 00:14:14.469 { 00:14:14.469 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:14.469 "subtype": "NVMe", 00:14:14.469 "listen_addresses": [ 00:14:14.469 { 00:14:14.469 "trtype": "TCP", 00:14:14.469 "adrfam": "IPv4", 00:14:14.469 "traddr": "10.0.0.2", 00:14:14.469 "trsvcid": "4420" 00:14:14.469 } 00:14:14.469 ], 00:14:14.469 "allow_any_host": true, 00:14:14.469 "hosts": [], 00:14:14.469 "serial_number": "SPDK00000000000002", 00:14:14.469 "model_number": "SPDK bdev Controller", 00:14:14.469 "max_namespaces": 32, 00:14:14.469 "min_cntlid": 1, 00:14:14.469 "max_cntlid": 65519, 00:14:14.469 "namespaces": [ 00:14:14.469 { 00:14:14.469 "nsid": 1, 00:14:14.469 "bdev_name": "Null2", 00:14:14.469 "name": "Null2", 00:14:14.469 "nguid": "492B5AEF6ECD4D72AEDEFAA461E4CCAB", 00:14:14.469 "uuid": "492b5aef-6ecd-4d72-aede-faa461e4ccab" 00:14:14.469 } 00:14:14.469 ] 00:14:14.469 }, 00:14:14.469 { 00:14:14.469 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:14.469 "subtype": "NVMe", 00:14:14.469 "listen_addresses": [ 00:14:14.469 { 00:14:14.469 "trtype": "TCP", 00:14:14.469 "adrfam": "IPv4", 00:14:14.469 "traddr": "10.0.0.2", 
00:14:14.469 "trsvcid": "4420" 00:14:14.469 } 00:14:14.469 ], 00:14:14.469 "allow_any_host": true, 00:14:14.469 "hosts": [], 00:14:14.469 "serial_number": "SPDK00000000000003", 00:14:14.469 "model_number": "SPDK bdev Controller", 00:14:14.469 "max_namespaces": 32, 00:14:14.469 "min_cntlid": 1, 00:14:14.469 "max_cntlid": 65519, 00:14:14.469 "namespaces": [ 00:14:14.469 { 00:14:14.469 "nsid": 1, 00:14:14.469 "bdev_name": "Null3", 00:14:14.469 "name": "Null3", 00:14:14.469 "nguid": "472E6421F08C4FDE9BE7F79CB16648D6", 00:14:14.469 "uuid": "472e6421-f08c-4fde-9be7-f79cb16648d6" 00:14:14.469 } 00:14:14.469 ] 00:14:14.469 }, 00:14:14.469 { 00:14:14.469 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:14.469 "subtype": "NVMe", 00:14:14.469 "listen_addresses": [ 00:14:14.469 { 00:14:14.469 "trtype": "TCP", 00:14:14.469 "adrfam": "IPv4", 00:14:14.469 "traddr": "10.0.0.2", 00:14:14.469 "trsvcid": "4420" 00:14:14.469 } 00:14:14.469 ], 00:14:14.469 "allow_any_host": true, 00:14:14.469 "hosts": [], 00:14:14.469 "serial_number": "SPDK00000000000004", 00:14:14.469 "model_number": "SPDK bdev Controller", 00:14:14.469 "max_namespaces": 32, 00:14:14.469 "min_cntlid": 1, 00:14:14.469 "max_cntlid": 65519, 00:14:14.469 "namespaces": [ 00:14:14.469 { 00:14:14.469 "nsid": 1, 00:14:14.469 "bdev_name": "Null4", 00:14:14.469 "name": "Null4", 00:14:14.469 "nguid": "9B4980BDB0D94DE8BBF5ED49BF2605BA", 00:14:14.469 "uuid": "9b4980bd-b0d9-4de8-bbf5-ed49bf2605ba" 00:14:14.469 } 00:14:14.469 ] 00:14:14.469 } 00:14:14.469 ] 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.469 17:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.469 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.470 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:14.470 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:14.470 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.470 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:14.732 17:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:14.732 rmmod nvme_tcp 00:14:14.732 rmmod nvme_fabrics 00:14:14.732 rmmod nvme_keyring 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 2551987 ']' 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 2551987 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2551987 ']' 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2551987 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2551987 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2551987' 00:14:14.732 killing process with pid 2551987 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2551987 00:14:14.732 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2551987 00:14:14.995 17:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.995 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.545 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:17.545 00:14:17.545 real 0m11.658s 00:14:17.545 user 0m8.813s 00:14:17.545 sys 0m6.145s 00:14:17.545 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.545 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.545 ************************************ 00:14:17.545 END TEST nvmf_target_discovery 00:14:17.545 ************************************ 00:14:17.545 17:41:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:17.545 17:41:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.545 17:41:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.545 17:41:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.545 ************************************ 00:14:17.545 START TEST nvmf_referrals 00:14:17.545 ************************************ 00:14:17.545 17:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:17.545 * Looking for test storage... 
00:14:17.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:17.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.545 --rc genhtml_branch_coverage=1 00:14:17.545 --rc genhtml_function_coverage=1 00:14:17.545 --rc genhtml_legend=1 00:14:17.545 --rc geninfo_all_blocks=1 00:14:17.545 --rc geninfo_unexecuted_blocks=1 00:14:17.545 00:14:17.545 ' 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:17.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.545 --rc genhtml_branch_coverage=1 00:14:17.545 --rc genhtml_function_coverage=1 00:14:17.545 --rc genhtml_legend=1 00:14:17.545 --rc geninfo_all_blocks=1 00:14:17.545 --rc geninfo_unexecuted_blocks=1 00:14:17.545 00:14:17.545 ' 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:17.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.545 --rc genhtml_branch_coverage=1 00:14:17.545 --rc genhtml_function_coverage=1 00:14:17.545 --rc genhtml_legend=1 00:14:17.545 --rc geninfo_all_blocks=1 00:14:17.545 --rc geninfo_unexecuted_blocks=1 00:14:17.545 00:14:17.545 ' 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:17.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.545 --rc genhtml_branch_coverage=1 00:14:17.545 --rc genhtml_function_coverage=1 00:14:17.545 --rc genhtml_legend=1 00:14:17.545 --rc geninfo_all_blocks=1 00:14:17.545 --rc geninfo_unexecuted_blocks=1 00:14:17.545 00:14:17.545 ' 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.545 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
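Every nvme discover call later in this trace reuses the host identity generated just above by common.sh. A minimal standalone sketch of those defaults follows; the parameter expansion used to derive NVME_HOSTID is an assumption that merely matches the traced values, not necessarily the real common.sh code:

NVMF_PORT=4420                              # main NVMe/TCP port; 4430 is reserved for referrals
NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumed derivation: the bare UUID suffix of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")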
00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:17.546 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.689 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.689 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:25.689 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:25.689 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:25.689 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:25.689 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:25.689 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:25.690 17:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:25.690 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:25.690 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:25.690 17:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:25.690 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:25.690 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:25.690 17:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:25.690 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:25.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:25.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:14:25.691 00:14:25.691 --- 10.0.0.2 ping statistics --- 00:14:25.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.691 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:14:25.691 00:14:25.691 --- 10.0.0.1 ping statistics --- 00:14:25.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.691 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=2556646 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 2556646 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2556646 ']' 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
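The two successful pings above close out the namespace plumbing. Distilled from the nvmf_tcp_init trace, the wiring amounts to the commands below; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are this run's defaults, not requirements:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port to the initiator
ping -c 1 10.0.0.2                                             # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator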
00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.691 17:41:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.691 [2024-11-20 17:41:24.834441] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:25.691 [2024-11-20 17:41:24.834516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.691 [2024-11-20 17:41:24.923413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.691 [2024-11-20 17:41:24.971717] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.691 [2024-11-20 17:41:24.971769] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.691 [2024-11-20 17:41:24.971781] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.691 [2024-11-20 17:41:24.971791] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.691 [2024-11-20 17:41:24.971800] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.691 [2024-11-20 17:41:24.971958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.691 [2024-11-20 17:41:24.972113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.691 [2024-11-20 17:41:24.972277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.691 [2024-11-20 17:41:24.972417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.954 [2024-11-20 17:41:25.720181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
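At this point the target app is up: the TCP transport exists and the discovery service is about to listen on 10.0.0.2:8009. Condensed from the trace, the RPC sequence referrals.sh drives next looks like the sketch below (rpc_cmd in this log is the autotest wrapper around scripts/rpc.py; the loop is a condensation of the three per-IP calls in the trace):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                # transport options as traced
rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                    # the three NVMF_REFERRAL_IP_* values
    rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
rpc_cmd nvmf_discovery_get_referrals | jq length               # referrals.sh@48 expects 3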
00:14:25.954 [2024-11-20 17:41:25.736453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.954 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.215 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:26.215 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:26.215 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:26.215 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:26.215 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:26.216 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:26.216 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:26.216 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:26.216 17:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:26.216 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.477 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:26.738 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:26.999 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:26.999 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:26.999 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:26.999 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:26.999 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:26.999 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:27.260 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:27.260 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.261 17:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:27.261 17:41:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:27.261 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:27.261 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:27.261 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:27.261 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:27.261 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:27.261 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.261 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:27.522 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:27.522 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:27.522 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:27.522 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:14:27.522 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.522 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:27.783 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:27.784 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:27.784 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
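The referral checks above pair two views of the same state: what the target reports over RPC (nvmf_discovery_get_referrals) and what an initiator actually receives on the discovery log page (nvme discover against port 8009); the test only proceeds when both agree before and after each nvmf_discovery_remove_referral. A condensed sketch of that pattern, with the jq filters copied from the trace above (the helper names and the rpc.py path are illustrative assumptions, not the harness's exact code):

    #!/usr/bin/env bash
    # Sketch: verify referral state both in the target's RPC view and on
    # the wire. Assumes an SPDK target listening on 10.0.0.2:8009 and
    # nvme-cli installed; RPC_PY path is an assumption.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    rpc_referral_ips() {
        # Target-side view: every referral traddr the target will hand out.
        "$RPC_PY" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    }

    wire_referral_ips() {
        # Initiator-side view: discovery log entries that are not the
        # current discovery subsystem itself, i.e. the referrals.
        nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
            jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    }

    [[ "$(rpc_referral_ips)" == "$(wire_referral_ips)" ]] && echo "referrals consistent"

Piping both lists through sort makes the comparison order-independent, which is why every get_referral_ips invocation in the trace above ends in sort.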
00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.045 rmmod nvme_tcp 00:14:28.045 rmmod nvme_fabrics 00:14:28.045 rmmod nvme_keyring 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 2556646 ']' 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 2556646 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2556646 ']' 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2556646 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2556646 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2556646' 00:14:28.045 killing process with pid 2556646 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2556646 00:14:28.045 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2556646 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.307 17:41:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.307 17:41:28 
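The iptr step in the teardown above restores the firewall by filtering rather than flushing: every rule the harness inserts carries an SPDK_NVMF comment tag (the matching tagged insert is visible later in this log, when the next test opens port 4420), so iptables-save | grep -v SPDK_NVMF | iptables-restore removes exactly the test's rules and leaves the host's own ruleset untouched. A sketch of the tag-and-sweep pair, reconstructed from the traced commands:

    # Tag-and-sweep firewall pattern from the trace above: every inserted
    # rule is tagged with its own arguments, so teardown can drop exactly
    # those rules by filtering the saved ruleset.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    iptr                                                        # sweep it away again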
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.220 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.481 00:14:30.481 real 0m13.179s 00:14:30.481 user 0m15.363s 00:14:30.481 sys 0m6.641s 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:30.481 ************************************ 00:14:30.481 END TEST nvmf_referrals 00:14:30.481 ************************************ 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.481 ************************************ 00:14:30.481 START TEST nvmf_connect_disconnect 00:14:30.481 ************************************ 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:30.481 * Looking for test storage... 00:14:30.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:14:30.481 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.743 17:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:30.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.743 --rc genhtml_branch_coverage=1 00:14:30.743 --rc genhtml_function_coverage=1 00:14:30.743 --rc genhtml_legend=1 00:14:30.743 --rc geninfo_all_blocks=1 00:14:30.743 --rc geninfo_unexecuted_blocks=1 00:14:30.743 00:14:30.743 ' 00:14:30.743 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:30.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.743 --rc genhtml_branch_coverage=1 00:14:30.744 --rc genhtml_function_coverage=1 00:14:30.744 --rc genhtml_legend=1 00:14:30.744 --rc geninfo_all_blocks=1 00:14:30.744 --rc geninfo_unexecuted_blocks=1 00:14:30.744 00:14:30.744 ' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:30.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.744 --rc genhtml_branch_coverage=1 00:14:30.744 --rc genhtml_function_coverage=1 00:14:30.744 --rc genhtml_legend=1 00:14:30.744 --rc geninfo_all_blocks=1 00:14:30.744 --rc geninfo_unexecuted_blocks=1 00:14:30.744 00:14:30.744 ' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
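The scripts/common.sh trace above ('lt 1.15 2') is a component-wise dotted-version comparison: both strings are split on '.', '-' and ':' and the numeric fields are compared left to right; because lcov 1.15 is older than 2, the harness keeps the pre-2.0 '--rc lcov_branch_coverage=1' option spelling seen in the exports that follow. A minimal standalone sketch of the same idea (not the harness's exact cmp_versions, which takes the operator as a parameter):

    # Sketch of the component-wise compare traced above: succeeds when $1
    # is strictly older than $2 (numeric fields only).
    version_lt() {
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        (( ${#v1[@]} > ${#v2[@]} )) && n=${#v1[@]} || n=${#v2[@]}
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earlier field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal: not strictly less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.0: keep the lcov_-prefixed coverage options"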
common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:30.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.744 --rc genhtml_branch_coverage=1 00:14:30.744 --rc genhtml_function_coverage=1 00:14:30.744 --rc genhtml_legend=1 00:14:30.744 --rc geninfo_all_blocks=1 00:14:30.744 --rc geninfo_unexecuted_blocks=1 00:14:30.744 00:14:30.744 ' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.744 17:41:30 
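The PATH values above balloon because paths/export.sh prepends the same Go, protoc and golangci toolchain directories every time it is sourced, without checking whether they are already present, so each nested source adds another copy of the three prefixes. A guarded prepend keeps repeated sourcing idempotent (a sketch of the general idiom, not a patch to export.sh):

    # Sketch: prepend a directory to PATH only if it is not already there,
    # so sourcing this file repeatedly leaves PATH unchanged.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present: leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH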
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:30.744 17:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:38.971 
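The "[: : integer expression expected" message above is a real, if harmless, scripting error: at nvmf/common.sh line 33 the left operand of -eq expands to the empty string, which test(1) cannot parse as an integer, so the comparison errors out instead of simply evaluating false. Defaulting the operand makes the test well-formed either way (a sketch; the variable name is illustrative):

    # '[' '' -eq 1 ']' is a runtime error: "" is not an integer.
    # ${flag:-0} substitutes 0 when the variable is unset or empty,
    # so the numeric test is always well-formed.
    flag=""                                  # e.g. an unset SPDK_* toggle
    if [ "${flag:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi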
17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:38.971 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:38.972 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:38.972 17:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:38.972 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:38.972 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:38.972 17:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:38.972 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:14:38.972 00:14:38.972 --- 10.0.0.2 ping statistics --- 00:14:38.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.972 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:14:38.972 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:14:38.972 00:14:38.972 --- 10.0.0.1 ping statistics --- 00:14:38.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.972 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=2562441 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 2562441 00:14:38.972 17:41:38 
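The nvmf_tcp_init sequence above turns the two ports of one physical E810 NIC into a point-to-point test bed: cvl_0_0 is moved into a fresh network namespace and addressed as the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic genuinely crosses between the two ports; the interface names come from the /sys/bus/pci/devices/<pci>/net/ scan earlier in the trace. A condensed replay of the traced commands (run as root; names and addresses as in this log):

    # Namespace plumbing as traced above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator

The two pings above are the same smoke test the harness runs before declaring the bed usable.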
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2562441 ']' 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.972 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:38.972 [2024-11-20 17:41:38.112239] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:38.972 [2024-11-20 17:41:38.112306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.972 [2024-11-20 17:41:38.200072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.972 [2024-11-20 17:41:38.250866] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.972 [2024-11-20 17:41:38.250914] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.973 [2024-11-20 17:41:38.250926] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.973 [2024-11-20 17:41:38.250936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.973 [2024-11-20 17:41:38.250944] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
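nvmfappstart launches the target inside that namespace and blocks until its RPC socket answers. The flags match the startup notices around it: -i 0 is the shared-memory instance id, -e 0xFFFF the tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified"), and -m 0xF a four-core mask (the four "Reactor started" notices that follow). A minimal launch-and-poll sketch (the harness's waitforlisten does more bookkeeping; the SPDK_BIN and RPC_PY paths are assumptions):

    # Minimal sketch: start nvmf_tgt in the test namespace and poll its
    # RPC socket (default /var/tmp/spdk.sock) until it answers.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for i in {1..100}; do
        # spdk_get_version succeeds once the app is up and serving RPCs.
        "$RPC_PY" spdk_get_version >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.1
    done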
00:14:38.973 [2024-11-20 17:41:38.251101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.973 [2024-11-20 17:41:38.251203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.973 [2024-11-20 17:41:38.251289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.973 [2024-11-20 17:41:38.251290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:39.234 [2024-11-20 17:41:38.986895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.234 17:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:39.234 17:41:39 
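The RPC sequence above, together with the listener registration just below, provisions the minimal target the rest of this test hammers: a TCP transport, a 64 MiB malloc ramdisk with 512-byte blocks (Malloc0), subsystem nqn.2016-06.io.spdk:cnode1 with -a (allow any host) and serial SPDKISFASTANDAWESOME, the ramdisk attached as a namespace, and a listener on 10.0.0.2:4420. The "disconnected 1 controller(s)" lines that follow, one per iteration, are the loop driven by num_iterations=100 and NVME_CONNECT='nvme connect -i 8'. A condensed replay under those assumptions (rpc.py stands in for the harness's rpc_cmd; the real loop also passes --hostnqn/--hostid to nvme connect):

    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2016-06.io.spdk:cnode1

    # Provision the target (flags exactly as in the trace above).
    "$RPC_PY" nvmf_create_transport -t tcp -o -u 8192 -c 0
    "$RPC_PY" bdev_malloc_create 64 512          # 64 MiB, 512 B blocks -> Malloc0
    "$RPC_PY" nvmf_create_subsystem "$SUBNQN" -a -s SPDKISFASTANDAWESOME
    "$RPC_PY" nvmf_subsystem_add_ns "$SUBNQN" Malloc0
    "$RPC_PY" nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420

    # Exercise it: one full NVMe/TCP fabric setup and teardown per pass,
    # connecting with 8 I/O queues each time.
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN"
        nvme disconnect -n "$SUBNQN"   # prints "... disconnected 1 controller(s)"
    done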
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:39.234 [2024-11-20 17:41:39.056353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:39.234 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:41.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.241 [2024-11-20 17:41:45.934279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13df1d0 is same with the state(6) to be set 00:14:46.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [00:14:48.791 - 00:18:31.222: the remaining iterations each print the same "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line, with no further errors] 00:18:33.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.794 rmmod nvme_tcp 00:18:33.794 rmmod nvme_fabrics 00:18:33.794 rmmod nvme_keyring 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 2562441 ']' 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 2562441 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2562441 ']' 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect --
common/autotest_common.sh@954 -- # kill -0 2562441 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2562441 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2562441' 00:18:33.794 killing process with pid 2562441 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2562441 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2562441 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.794 17:45:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.456 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:36.457 00:18:36.457 real 4m5.544s 00:18:36.457 user 15m34.633s 00:18:36.457 sys 0m25.355s 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:36.457 ************************************ 00:18:36.457 END TEST nvmf_connect_disconnect 00:18:36.457 ************************************ 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:36.457 ************************************ 00:18:36.457 START TEST nvmf_multitarget 00:18:36.457 ************************************ 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:36.457 * Looking for test storage... 00:18:36.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:18:36.457 17:45:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:36.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.457 --rc genhtml_branch_coverage=1 00:18:36.457 --rc genhtml_function_coverage=1 00:18:36.457 --rc genhtml_legend=1 00:18:36.457 --rc geninfo_all_blocks=1 00:18:36.457 --rc geninfo_unexecuted_blocks=1 00:18:36.457 00:18:36.457 ' 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:36.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.457 --rc genhtml_branch_coverage=1 00:18:36.457 --rc genhtml_function_coverage=1 00:18:36.457 --rc genhtml_legend=1 00:18:36.457 --rc geninfo_all_blocks=1 00:18:36.457 --rc geninfo_unexecuted_blocks=1 00:18:36.457 00:18:36.457 ' 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:36.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.457 --rc genhtml_branch_coverage=1 00:18:36.457 --rc genhtml_function_coverage=1 00:18:36.457 --rc genhtml_legend=1 00:18:36.457 --rc geninfo_all_blocks=1 00:18:36.457 --rc geninfo_unexecuted_blocks=1 00:18:36.457 00:18:36.457 ' 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:36.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.457 --rc genhtml_branch_coverage=1 00:18:36.457 --rc genhtml_function_coverage=1 00:18:36.457 --rc genhtml_legend=1 00:18:36.457 --rc geninfo_all_blocks=1 00:18:36.457 --rc geninfo_unexecuted_blocks=1 00:18:36.457 00:18:36.457 ' 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.457 17:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.457 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:36.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:36.458 17:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:18:36.458 17:45:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:43.077 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:43.077 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
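The trace above is the NIC auto-discovery step: build a whitelist of supported vendor:device IDs (e810/x722/mlx), walk the host's PCI devices, and map each match to its kernel net device through sysfs. A minimal standalone sketch of that walk, assuming the standard sysfs layout (/sys/bus/pci/devices/<bdf>/{vendor,device,net/}); the function name list_nvmf_nics and the trimmed ID table are illustrative, not part of nvmf/common.sh:

# Sketch: find supported Intel E810 ports and their net devices via sysfs.
list_nvmf_nics() {
    local intel=0x8086 pci vendor device netdev
    local -a e810_ids=(0x1592 0x159b)   # E810 device IDs matched in this run
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        [[ " ${e810_ids[*]} " == *" $device "* ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        # each matching PCI function exposes its kernel interface under net/
        for netdev in "$pci"/net/*; do
            if [[ -e $netdev ]]; then
                echo "  net device under ${pci##*/}: ${netdev##*/}"
            fi
        done
    done
    return 0
}
list_nvmf_nics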
00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:43.077 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:43.077 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.077 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.078 17:45:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:43.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:18:43.339 00:18:43.339 --- 10.0.0.2 ping statistics --- 00:18:43.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.339 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:18:43.339 00:18:43.339 --- 10.0.0.1 ping statistics --- 00:18:43.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.339 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:43.339 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=2614172 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 2614172 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2614172 ']' 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.601 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:43.601 [2024-11-20 17:45:43.363697] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
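The nvmf_tcp_init sequence traced just above splits one dual-port NIC into a target/initiator pair by moving the target port into a private network namespace, so both ends of the TCP transport run on a single host. A condensed rendering of those commands (root required; the interface names and 10.0.0.x addresses are the ones this run used, and the iptables comment text is shortened for illustration):

# Sketch of the namespace topology built by nvmf_tcp_init in this log.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"             # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP on the host stack
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Rule is tagged SPDK_NVMF so teardown can strip it with:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow nvmf tcp 4420'
ping -c 1 10.0.0.2                          # host -> namespace reachability
ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> host reachability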
00:18:43.601 [2024-11-20 17:45:43.363761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.601 [2024-11-20 17:45:43.431324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.601 [2024-11-20 17:45:43.461157] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.601 [2024-11-20 17:45:43.461196] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.601 [2024-11-20 17:45:43.461204] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.601 [2024-11-20 17:45:43.461211] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.601 [2024-11-20 17:45:43.461216] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.601 [2024-11-20 17:45:43.461409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.601 [2024-11-20 17:45:43.461577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.601 [2024-11-20 17:45:43.461730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.601 [2024-11-20 17:45:43.461731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:43.862 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:44.123 "nvmf_tgt_1" 00:18:44.123 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:44.123 "nvmf_tgt_2" 00:18:44.123 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
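Stripped of the xtrace noise, the multitarget test body being traced here is a create/verify/delete cycle against the running nvmf_tgt. A sketch of that shape, assuming $rpc_py points at test/nvmf/target/multitarget_rpc.py as in this log; the check helper is illustrative, not the test's own:

# Sketch: the nvmf_multitarget flow (-s 32 as passed in this run).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
check() { [ "$1" = "$2" ] || { echo "expected $2 targets, got $1" >&2; exit 1; }; }

check "$($rpc_py nvmf_get_targets | jq length)" 1   # only the default target
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
check "$($rpc_py nvmf_get_targets | jq length)" 3   # default + two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
check "$($rpc_py nvmf_get_targets | jq length)" 1   # back to just the default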
00:18:44.123 17:45:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:44.123 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:44.123 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:44.383 true 00:18:44.383 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:44.383 true 00:18:44.383 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:44.383 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:44.644 rmmod nvme_tcp 00:18:44.644 rmmod nvme_fabrics 00:18:44.644 rmmod nvme_keyring 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 2614172 ']' 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 2614172 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2614172 ']' 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2614172 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2614172 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:44.644 17:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2614172' 00:18:44.644 killing process with pid 2614172 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2614172 00:18:44.644 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2614172 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.906 17:45:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.821 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.821 00:18:46.821 real 0m10.867s 00:18:46.821 user 0m7.339s 00:18:46.821 sys 0m5.852s 00:18:46.821 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:46.821 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:46.821 ************************************ 00:18:46.821 END TEST nvmf_multitarget 00:18:46.821 ************************************ 00:18:46.821 17:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:46.821 17:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:46.821 17:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:46.821 17:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.821 ************************************ 00:18:46.821 START TEST nvmf_rpc 00:18:46.821 ************************************ 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:47.083 * Looking for test storage... 
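The shutdown traced above goes through autotest_common.sh's killprocess helper: confirm the pid is alive with kill -0, read its comm name, refuse to signal a bare sudo, then kill and reap. A simplified standalone rendering of exactly the path taken in this run (the real helper covers more platforms and cases, and wait only reaps pids that are children of the calling shell):

# Sketch: the killprocess pattern seen in this teardown.
killprocess() {
    local pid=$1 process_name=
    kill -0 "$pid" 2>/dev/null || return 1        # is the pid still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1        # never kill a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap it so ports/sockets free up
}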
00:18:47.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.083 --rc genhtml_branch_coverage=1 00:18:47.083 --rc genhtml_function_coverage=1 00:18:47.083 --rc genhtml_legend=1 00:18:47.083 --rc geninfo_all_blocks=1 00:18:47.083 --rc geninfo_unexecuted_blocks=1 00:18:47.083 00:18:47.083 ' 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.083 --rc genhtml_branch_coverage=1 00:18:47.083 --rc genhtml_function_coverage=1 00:18:47.083 --rc genhtml_legend=1 00:18:47.083 --rc geninfo_all_blocks=1 00:18:47.083 --rc geninfo_unexecuted_blocks=1 00:18:47.083 00:18:47.083 ' 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.083 --rc genhtml_branch_coverage=1 00:18:47.083 --rc genhtml_function_coverage=1 00:18:47.083 --rc genhtml_legend=1 00:18:47.083 --rc geninfo_all_blocks=1 00:18:47.083 --rc geninfo_unexecuted_blocks=1 00:18:47.083 00:18:47.083 ' 00:18:47.083 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.083 --rc genhtml_branch_coverage=1 00:18:47.083 --rc genhtml_function_coverage=1 00:18:47.083 --rc genhtml_legend=1 00:18:47.083 --rc geninfo_all_blocks=1 00:18:47.083 --rc geninfo_unexecuted_blocks=1 00:18:47.083 00:18:47.083 ' 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
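The cmp_versions trace repeated at the top of each test (here deciding whether lcov 1.15 predates 2, to pick coverage flags) is a component-wise numeric compare over fields split on '.', '-' and ':'. A simplified sketch covering only the '<' path exercised in this log; non-numeric components and the remaining operators of scripts/common.sh are out of scope:

# Sketch: component-wise version compare, as in 'lt 1.15 2' above.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}           # missing fields compare as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]                             # every component matched
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov 1.15 is older than 2"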
00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:47.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:47.084 17:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:18:47.084 17:45:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:55.229 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:55.229 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:55.229 
17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:55.229 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:55.229 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.229 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:55.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:18:55.230 00:18:55.230 --- 10.0.0.2 ping statistics --- 00:18:55.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.230 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:18:55.230 00:18:55.230 --- 10.0.0.1 ping statistics --- 00:18:55.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.230 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=2618631 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 2618631 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2618631 ']' 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.230 17:45:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.230 [2024-11-20 17:45:54.619802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
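[annotation] The ip/iptables sequence above is nvmf_tcp_init: with two physical ports cabled back to back, one port (cvl_0_0) is moved into a private namespace cvl_0_0_ns_spdk and addressed 10.0.0.2 as the target, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator; port 4420 is opened and both directions are ping-verified. Condensed from the trace, with the interface and namespace names this run chose:

    # Target side lives in its own netns; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond RTTs in the ping output are consistent with a direct cable between the two ports rather than a routed path.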
00:18:55.230 [2024-11-20 17:45:54.619870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.230 [2024-11-20 17:45:54.712554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.230 [2024-11-20 17:45:54.759501] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.230 [2024-11-20 17:45:54.759552] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.230 [2024-11-20 17:45:54.759564] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.230 [2024-11-20 17:45:54.759574] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.230 [2024-11-20 17:45:54.759583] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.230 [2024-11-20 17:45:54.759737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.230 [2024-11-20 17:45:54.759767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.230 [2024-11-20 17:45:54.759893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.230 [2024-11-20 17:45:54.759894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.802 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:55.802 "tick_rate": 2400000000, 00:18:55.802 "poll_groups": [ 00:18:55.802 { 00:18:55.802 "name": "nvmf_tgt_poll_group_000", 00:18:55.802 "admin_qpairs": 0, 00:18:55.802 "io_qpairs": 0, 00:18:55.802 "current_admin_qpairs": 0, 00:18:55.802 "current_io_qpairs": 0, 00:18:55.802 "pending_bdev_io": 0, 00:18:55.802 "completed_nvme_io": 0, 00:18:55.802 "transports": [] 00:18:55.802 }, 00:18:55.802 { 00:18:55.802 "name": "nvmf_tgt_poll_group_001", 00:18:55.802 "admin_qpairs": 0, 00:18:55.802 "io_qpairs": 0, 00:18:55.802 "current_admin_qpairs": 0, 00:18:55.802 "current_io_qpairs": 0, 00:18:55.802 "pending_bdev_io": 0, 00:18:55.802 "completed_nvme_io": 0, 00:18:55.802 "transports": [] 00:18:55.802 }, 00:18:55.802 { 00:18:55.802 "name": "nvmf_tgt_poll_group_002", 00:18:55.802 "admin_qpairs": 0, 00:18:55.802 "io_qpairs": 0, 00:18:55.802 
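[annotation] nvmfappstart then launches the target inside the namespace and blocks until the RPC socket answers; the EAL and reactor notices above are nvmf_tgt coming up on 4 cores (-m 0xF). A sketch of the equivalent commands, using the workspace path as it appears in this run; the polling loop is a simplified stand-in for waitforlisten, not its actual body:

    # Every later rpc_cmd / nvme step talks to this process via /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the UNIX-domain RPC socket accepts requests (waitforlisten analogue).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done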
"current_admin_qpairs": 0, 00:18:55.802 "current_io_qpairs": 0, 00:18:55.802 "pending_bdev_io": 0, 00:18:55.802 "completed_nvme_io": 0, 00:18:55.802 "transports": [] 00:18:55.802 }, 00:18:55.802 { 00:18:55.802 "name": "nvmf_tgt_poll_group_003", 00:18:55.802 "admin_qpairs": 0, 00:18:55.802 "io_qpairs": 0, 00:18:55.802 "current_admin_qpairs": 0, 00:18:55.802 "current_io_qpairs": 0, 00:18:55.802 "pending_bdev_io": 0, 00:18:55.802 "completed_nvme_io": 0, 00:18:55.802 "transports": [] 00:18:55.802 } 00:18:55.802 ] 00:18:55.802 }' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.803 [2024-11-20 17:45:55.597576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:55.803 "tick_rate": 2400000000, 00:18:55.803 "poll_groups": [ 00:18:55.803 { 00:18:55.803 "name": "nvmf_tgt_poll_group_000", 00:18:55.803 "admin_qpairs": 0, 00:18:55.803 "io_qpairs": 0, 00:18:55.803 "current_admin_qpairs": 0, 00:18:55.803 "current_io_qpairs": 0, 00:18:55.803 "pending_bdev_io": 0, 00:18:55.803 "completed_nvme_io": 0, 00:18:55.803 "transports": [ 00:18:55.803 { 00:18:55.803 "trtype": "TCP" 00:18:55.803 } 00:18:55.803 ] 00:18:55.803 }, 00:18:55.803 { 00:18:55.803 "name": "nvmf_tgt_poll_group_001", 00:18:55.803 "admin_qpairs": 0, 00:18:55.803 "io_qpairs": 0, 00:18:55.803 "current_admin_qpairs": 0, 00:18:55.803 "current_io_qpairs": 0, 00:18:55.803 "pending_bdev_io": 0, 00:18:55.803 "completed_nvme_io": 0, 00:18:55.803 "transports": [ 00:18:55.803 { 00:18:55.803 "trtype": "TCP" 00:18:55.803 } 00:18:55.803 ] 00:18:55.803 }, 00:18:55.803 { 00:18:55.803 "name": "nvmf_tgt_poll_group_002", 00:18:55.803 "admin_qpairs": 0, 00:18:55.803 "io_qpairs": 0, 00:18:55.803 "current_admin_qpairs": 0, 00:18:55.803 "current_io_qpairs": 0, 00:18:55.803 "pending_bdev_io": 0, 00:18:55.803 "completed_nvme_io": 0, 00:18:55.803 "transports": [ 00:18:55.803 { 00:18:55.803 "trtype": "TCP" 
00:18:55.803 } 00:18:55.803 ] 00:18:55.803 }, 00:18:55.803 { 00:18:55.803 "name": "nvmf_tgt_poll_group_003", 00:18:55.803 "admin_qpairs": 0, 00:18:55.803 "io_qpairs": 0, 00:18:55.803 "current_admin_qpairs": 0, 00:18:55.803 "current_io_qpairs": 0, 00:18:55.803 "pending_bdev_io": 0, 00:18:55.803 "completed_nvme_io": 0, 00:18:55.803 "transports": [ 00:18:55.803 { 00:18:55.803 "trtype": "TCP" 00:18:55.803 } 00:18:55.803 ] 00:18:55.803 } 00:18:55.803 ] 00:18:55.803 }' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:55.803 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 Malloc1 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
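[annotation] target/rpc.sh first snapshots nvmf_get_stats (four poll groups, one per core, with empty transport lists), creates the TCP transport, then snapshots again to confirm each poll group now carries a TCP transport and all qpair counters are zero; jcount and jsum are thin jq/awk wrappers that count and sum fields. The same checks flattened out, with an illustrative rpc wrapper that is not part of the harness:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # illustrative wrapper
    # Before: 4 poll groups, no transports yet.
    rpc nvmf_get_stats | jq '.poll_groups[].name' | wc -l          # -> 4
    rpc nvmf_get_stats | jq '.poll_groups[0].transports[0]'        # -> null
    # Create the TCP transport with the flags from the trace: -t tcp -o -u 8192.
    rpc nvmf_create_transport -t tcp -o -u 8192
    # After: every poll group lists {"trtype": "TCP"}; qpair sums stay 0.
    rpc nvmf_get_stats | jq '.poll_groups[].transports[0].trtype'  # -> "TCP" x4
    rpc nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'     # jsum analogue -> 0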
common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 [2024-11-20 17:45:55.795677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:18:56.064 [2024-11-20 17:45:55.832712] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:18:56.064 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:56.064 could not add new controller: failed to write to nvme-fabrics device 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:56.064 17:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.064 17:45:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:57.447 17:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:57.447 17:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:57.447 17:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.447 17:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:57.447 17:45:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:59.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
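[annotation] NOT is the harness's expected-failure wrapper: valid_exec_arg resolves the command to an executable, runs it, and the exit status is inverted so the denied connect above (es=1) counts as a pass while an unexpected success would fail the test. A minimal sketch of that inversion, reconstructed from the es checks visible in the trace and assuming signal exits (es > 128) should still count as genuine failures:

    # Sketch of the NOT wrapper used around the expected-to-fail `nvme connect`.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"    # killed by a signal: a genuine failure
        (( es != 0 ))                     # succeed only if the command failed
    }
    # Usage: the connect must be rejected while the host is not on the ACL.
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420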
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:59.989 [2024-11-20 17:45:59.668666] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:18:59.989 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:59.989 could not add new controller: failed to write to nvme-fabrics device 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.989 
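[annotation] This passage is the host-ACL test: with allow_any_host disabled, the connect from the host NQN is rejected by nvmf_qpair_access_allowed ("does not allow host", surfacing as an I/O error on /dev/nvme-fabrics); adding the host NQN makes the same connect succeed, removing it makes it fail again, and re-enabling allow_any_host opens the subsystem to everyone. The RPC/connect pairs condensed, reusing the rpc and NOT sketches above; SUBNQN is an illustrative shorthand:

    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    rpc nvmf_subsystem_allow_any_host -d $SUBNQN                 # enforce the ACL
    NOT nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_host $SUBNQN $HOSTNQN                 # whitelist the host
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
    nvme disconnect -n $SUBNQN
    rpc nvmf_subsystem_remove_host $SUBNQN $HOSTNQN              # off the list again
    NOT nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_allow_any_host -e $SUBNQN                 # open to any host
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420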
17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.989 17:45:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:01.373 17:46:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:19:01.373 17:46:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:01.373 17:46:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.373 17:46:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:01.373 17:46:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:03.286 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:03.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:03.547 
17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.547 [2024-11-20 17:46:03.398800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.547 17:46:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:05.460 17:46:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:05.460 17:46:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:05.460 17:46:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.460 17:46:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:05.460 17:46:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:07.372 17:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:07.373 17:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:07.373 17:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:07.373 17:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:07.373 17:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.373 17:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:07.373 17:46:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.373 [2024-11-20 17:46:07.126013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
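[annotation] From target/rpc.sh@81 onward the trace is one loop body repeated five times: build a fresh cnode1 with Malloc1 attached as namespace 5, connect, verify the serial shows up on the initiator, disconnect, then strip the namespace and delete the subsystem. One iteration condensed (rpc, HOSTNQN and HOSTID as in the sketches above; the remaining iterations below are identical):

    for i in $(seq 1 5); do
        rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # NSID 5
        rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
            -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME              # poll until the disk appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME   # poll until it is gone
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done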
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.373 17:46:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:09.287 17:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:09.287 17:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:09.287 17:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.287 17:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:09.287 17:46:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.199 [2024-11-20 17:46:10.889105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.199 17:46:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:12.583 17:46:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:12.583 17:46:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:12.583 17:46:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.583 17:46:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:12.583 17:46:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:15.128 
17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 [2024-11-20 17:46:14.644133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.129 17:46:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:16.513 17:46:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:16.513 17:46:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:16.513 17:46:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:16.513 17:46:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:16.513 17:46:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:18.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:18.425 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
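[annotation] waitforserial and waitforserial_disconnect (common/autotest_common.sh@1198 and @1219) are the polling helpers that bracket every connect/disconnect above: they grep lsblk's SERIAL column for SPDKISFASTANDAWESOME until the expected number of block devices is present, or absent, with the sleep 2 retry visible in the trace. A simplified sketch reconstructed from those trace lines; the real helpers differ in detail but cap retries the same way:

    waitforserial() {
        local serial=$1 want=${2:-1} have i=0
        while (( i++ <= 15 )); do
            have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( have == want )) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }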
00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:18.686 [2024-11-20 17:46:18.408847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.686 17:46:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:20.070 17:46:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:20.070 17:46:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:20.070 17:46:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:20.070 17:46:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:20.070 17:46:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:22.616 17:46:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:22.616 17:46:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:22.616 17:46:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:22.616 17:46:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:22.616 17:46:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:22.616 17:46:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:22.616 17:46:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:22.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:22.616 
17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 [2024-11-20 17:46:22.148612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 [2024-11-20 17:46:22.216793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.616 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 
17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 [2024-11-20 17:46:22.284997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 [2024-11-20 17:46:22.353196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 [2024-11-20 17:46:22.421412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.617 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:22.618 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.618 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:19:22.618 "tick_rate": 2400000000,
00:19:22.618 "poll_groups": [
00:19:22.618 {
00:19:22.618 "name": "nvmf_tgt_poll_group_000",
00:19:22.618 "admin_qpairs": 0,
00:19:22.618 "io_qpairs": 224,
00:19:22.618 "current_admin_qpairs": 0,
00:19:22.618 "current_io_qpairs": 0,
00:19:22.618 "pending_bdev_io": 0,
00:19:22.618 "completed_nvme_io": 274,
00:19:22.618 "transports": [
00:19:22.618 {
00:19:22.618 "trtype": "TCP"
00:19:22.618 }
00:19:22.618 ]
00:19:22.618 },
00:19:22.618 {
00:19:22.618 "name": "nvmf_tgt_poll_group_001",
00:19:22.618 "admin_qpairs": 1,
00:19:22.618 "io_qpairs": 223,
00:19:22.618 "current_admin_qpairs": 0,
00:19:22.618 "current_io_qpairs": 0,
00:19:22.618 "pending_bdev_io": 0,
00:19:22.618 "completed_nvme_io": 517,
00:19:22.618 "transports": [
00:19:22.618 {
00:19:22.618 "trtype": "TCP"
00:19:22.618 }
00:19:22.618 ]
00:19:22.618 },
00:19:22.618 {
00:19:22.618 "name": "nvmf_tgt_poll_group_002",
00:19:22.618 "admin_qpairs": 6,
00:19:22.618 "io_qpairs": 218,
00:19:22.618 "current_admin_qpairs": 0,
00:19:22.618 "current_io_qpairs": 0,
00:19:22.618 "pending_bdev_io": 0,
00:19:22.618 "completed_nvme_io": 223,
00:19:22.618 "transports": [
00:19:22.618 {
00:19:22.618 "trtype": "TCP"
00:19:22.618 }
00:19:22.618 ]
00:19:22.618 },
00:19:22.618 {
00:19:22.618 "name": "nvmf_tgt_poll_group_003",
00:19:22.618 "admin_qpairs": 0,
00:19:22.618 "io_qpairs": 224,
00:19:22.618 "current_admin_qpairs": 0,
00:19:22.618 "current_io_qpairs": 0,
00:19:22.618 "pending_bdev_io": 0,
00:19:22.618 "completed_nvme_io": 225,
00:19:22.618 "transports": [
00:19:22.618 {
00:19:22.618 "trtype": "TCP"
00:19:22.618 }
00:19:22.618 ]
00:19:22.618 }
00:19:22.618 ]
00:19:22.618 }'
00:19:22.618 17:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:22.618 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:22.618 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:22.618 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:22.880 rmmod nvme_tcp 00:19:22.880 rmmod nvme_fabrics 00:19:22.880 rmmod nvme_keyring 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 2618631 ']' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 2618631 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2618631 ']' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2618631 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2618631 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2618631' 00:19:22.880 killing process with pid 2618631 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2618631 00:19:22.880 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2618631 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.141 17:46:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.052 17:46:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:25.053 00:19:25.053 real 0m38.195s 00:19:25.053 user 1m54.293s 00:19:25.053 sys 0m7.937s 00:19:25.053 17:46:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.053 17:46:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.053 ************************************ 00:19:25.053 END TEST nvmf_rpc 00:19:25.053 ************************************ 00:19:25.315 17:46:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:25.315 17:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:25.315 17:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.315 17:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:25.315 ************************************ 00:19:25.315 START TEST nvmf_invalid 00:19:25.315 ************************************ 00:19:25.315 17:46:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:25.315 * Looking for test storage... 
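The jsum helper used in the stats validation above reduces a jq path over the nvmf_get_stats JSON to one number: jq emits the field once per poll group and awk sums the stream. An equivalent pair of one-liners, assuming the JSON shown earlier is held in a hypothetical $stats variable:

# $stats is assumed to hold the nvmf_get_stats JSON captured above.
jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'  # 0+1+6+0 = 7
jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'     # 224+223+218+224 = 889

Those are the totals the test then asserts against with (( 7 > 0 )) and (( 889 > 0 )).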
00:19:25.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:25.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.315 --rc genhtml_branch_coverage=1 00:19:25.315 --rc genhtml_function_coverage=1 00:19:25.315 --rc genhtml_legend=1 00:19:25.315 --rc geninfo_all_blocks=1 00:19:25.315 --rc geninfo_unexecuted_blocks=1 00:19:25.315 00:19:25.315 ' 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:25.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.315 --rc genhtml_branch_coverage=1 00:19:25.315 --rc genhtml_function_coverage=1 00:19:25.315 --rc genhtml_legend=1 00:19:25.315 --rc geninfo_all_blocks=1 00:19:25.315 --rc geninfo_unexecuted_blocks=1 00:19:25.315 00:19:25.315 ' 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:25.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.315 --rc genhtml_branch_coverage=1 00:19:25.315 --rc genhtml_function_coverage=1 00:19:25.315 --rc genhtml_legend=1 00:19:25.315 --rc geninfo_all_blocks=1 00:19:25.315 --rc geninfo_unexecuted_blocks=1 00:19:25.315 00:19:25.315 ' 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:25.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.315 --rc genhtml_branch_coverage=1 00:19:25.315 --rc genhtml_function_coverage=1 00:19:25.315 --rc genhtml_legend=1 00:19:25.315 --rc geninfo_all_blocks=1 00:19:25.315 --rc geninfo_unexecuted_blocks=1 00:19:25.315 00:19:25.315 ' 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:19:25.315 17:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.315 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.316 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.577 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:25.577 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:25.577 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:19:25.577 17:46:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.719 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:33.720 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:33.720 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:33.720 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:33.720 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:33.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:19:33.720 00:19:33.720 --- 10.0.0.2 ping statistics --- 00:19:33.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.720 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:33.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:33.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:19:33.720 00:19:33.720 --- 10.0.0.1 ping statistics --- 00:19:33.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.720 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=2628372 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 2628372 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2628372 ']' 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.720 17:46:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:33.720 [2024-11-20 17:46:32.835800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:33.720 [2024-11-20 17:46:32.835865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.720 [2024-11-20 17:46:32.923342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.720 [2024-11-20 17:46:32.971930] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.721 [2024-11-20 17:46:32.971981] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.721 [2024-11-20 17:46:32.971995] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.721 [2024-11-20 17:46:32.972006] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.721 [2024-11-20 17:46:32.972014] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.721 [2024-11-20 17:46:32.972195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.721 [2024-11-20 17:46:32.972351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.721 [2024-11-20 17:46:32.972637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.721 [2024-11-20 17:46:32.972639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.982 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.982 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:19:33.982 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:33.982 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.982 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:33.982 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.982 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:33.982 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3299 00:19:33.982 [2024-11-20 17:46:33.875911] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:34.245 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:19:34.245 { 00:19:34.245 "nqn": "nqn.2016-06.io.spdk:cnode3299", 00:19:34.245 "tgt_name": "foobar", 00:19:34.245 "method": "nvmf_create_subsystem", 00:19:34.245 "req_id": 1 00:19:34.245 } 00:19:34.245 Got JSON-RPC error response 00:19:34.245 response: 00:19:34.245 { 00:19:34.245 "code": -32603, 00:19:34.245 "message": "Unable to find target foobar" 00:19:34.245 }' 00:19:34.245 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:19:34.245 { 00:19:34.245 "nqn": "nqn.2016-06.io.spdk:cnode3299", 00:19:34.245 "tgt_name": "foobar", 00:19:34.245 "method": "nvmf_create_subsystem", 00:19:34.245 "req_id": 1 00:19:34.245 } 00:19:34.245 Got JSON-RPC error response 00:19:34.245 
response: 00:19:34.245 { 00:19:34.245 "code": -32603, 00:19:34.245 "message": "Unable to find target foobar" 00:19:34.245 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:34.245 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:34.245 17:46:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1489 00:19:34.245 [2024-11-20 17:46:34.084806] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1489: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:34.245 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:19:34.245 { 00:19:34.245 "nqn": "nqn.2016-06.io.spdk:cnode1489", 00:19:34.245 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:34.245 "method": "nvmf_create_subsystem", 00:19:34.245 "req_id": 1 00:19:34.245 } 00:19:34.245 Got JSON-RPC error response 00:19:34.245 response: 00:19:34.245 { 00:19:34.245 "code": -32602, 00:19:34.245 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:34.245 }' 00:19:34.245 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:19:34.245 { 00:19:34.245 "nqn": "nqn.2016-06.io.spdk:cnode1489", 00:19:34.245 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:34.245 "method": "nvmf_create_subsystem", 00:19:34.245 "req_id": 1 00:19:34.245 } 00:19:34.245 Got JSON-RPC error response 00:19:34.245 response: 00:19:34.245 { 00:19:34.245 "code": -32602, 00:19:34.245 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:34.245 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:34.245 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:34.245 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6388 00:19:34.507 [2024-11-20 17:46:34.293556] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6388: invalid model number 'SPDK_Controller' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:19:34.507 { 00:19:34.507 "nqn": "nqn.2016-06.io.spdk:cnode6388", 00:19:34.507 "model_number": "SPDK_Controller\u001f", 00:19:34.507 "method": "nvmf_create_subsystem", 00:19:34.507 "req_id": 1 00:19:34.507 } 00:19:34.507 Got JSON-RPC error response 00:19:34.507 response: 00:19:34.507 { 00:19:34.507 "code": -32602, 00:19:34.507 "message": "Invalid MN SPDK_Controller\u001f" 00:19:34.507 }' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:19:34.507 { 00:19:34.507 "nqn": "nqn.2016-06.io.spdk:cnode6388", 00:19:34.507 "model_number": "SPDK_Controller\u001f", 00:19:34.507 "method": "nvmf_create_subsystem", 00:19:34.507 "req_id": 1 00:19:34.507 } 00:19:34.507 Got JSON-RPC error response 00:19:34.507 response: 00:19:34.507 { 00:19:34.507 "code": -32602, 00:19:34.507 "message": "Invalid MN SPDK_Controller\u001f" 00:19:34.507 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:19:34.507 17:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
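The wall of (( ll++ )) / printf / echo -e steps here is gen_random_s being traced one character at a time; a condensed equivalent of what it computes, using the same 32-127 character-code table shown above (a sketch, not byte-for-byte the upstream helper):

  gen_random_s() {
      local length=$1 ll c string=
      for (( ll = 0; ll < length; ll++ )); do
          # pick a random code point from the 32..127 table
          printf -v c '\\x%x' $(( RANDOM % 96 + 32 ))
          string+=$(echo -e "$c")
      done
      echo "$string"
  }

The traced version appends shell metacharacters in quotes, which is why characters like '?', '~' and ')' show up quoted in the string+= steps.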
00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:19:34.507 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:19:34.508 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x2b' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 36 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ : == \- ]] 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ':.tF?gBZ~Dh+)^p-6t$t1' 00:19:34.769 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ':.tF?gBZ~Dh+)^p-6t$t1' nqn.2016-06.io.spdk:cnode18306 00:19:34.769 [2024-11-20 17:46:34.679038] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18306: invalid serial number ':.tF?gBZ~Dh+)^p-6t$t1' 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:19:35.036 { 00:19:35.036 "nqn": "nqn.2016-06.io.spdk:cnode18306", 00:19:35.036 "serial_number": ":.tF?gBZ~Dh+)^p-6t$t1", 00:19:35.036 "method": "nvmf_create_subsystem", 00:19:35.036 "req_id": 1 00:19:35.036 } 00:19:35.036 Got JSON-RPC error response 00:19:35.036 response: 00:19:35.036 { 00:19:35.036 "code": -32602, 00:19:35.036 "message": "Invalid SN :.tF?gBZ~Dh+)^p-6t$t1" 00:19:35.036 }' 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:19:35.036 { 00:19:35.036 "nqn": "nqn.2016-06.io.spdk:cnode18306", 00:19:35.036 "serial_number": ":.tF?gBZ~Dh+)^p-6t$t1", 00:19:35.036 "method": "nvmf_create_subsystem", 00:19:35.036 "req_id": 1 00:19:35.036 } 00:19:35.036 Got JSON-RPC error response 00:19:35.036 response: 00:19:35.036 { 00:19:35.036 "code": -32602, 00:19:35.036 "message": "Invalid SN :.tF?gBZ~Dh+)^p-6t$t1" 00:19:35.036 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' 
'52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.036 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 
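Every negative case in this file follows the same shape: feed a deliberately bad value to nvmf_create_subsystem, capture the JSON-RPC error, and glob-match the message. With $sn and $mn standing in for the random strings being generated here (hypothetical variable names for this sketch), the serial- and model-number cases amount to:

  # 21 random printable chars exceeds the 20-byte NVMe serial-number field
  out=$(./scripts/rpc.py nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode18306 2>&1)
  [[ $out == *'Invalid SN'* ]]
  # 41 random chars likewise exceeds the 40-byte model-number field
  out=$(./scripts/rpc.py nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode29775 2>&1)
  [[ $out == *'Invalid MN'* ]]

The lengths 21 and 41 are not arbitrary: they are exactly one byte over the NVMe SN (20 bytes) and MN (40 bytes) field sizes, which is what the first gen_random_s 21 above and the gen_random_s 41 running here are for.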
00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:19:35.037 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=r 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.038 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x70' 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:19:35.335 17:46:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 121 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:19:35.335 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' S16)*vbRql>g}HE~xC&+XmSir}Lb%)ppnH;-/fy2' 00:19:35.336 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ' S16)*vbRql>g}HE~xC&+XmSir}Lb%)ppnH;-/fy2' nqn.2016-06.io.spdk:cnode29775 00:19:35.336 [2024-11-20 17:46:35.229184] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29775: invalid model number ' S16)*vbRql>g}HE~xC&+XmSir}Lb%)ppnH;-/fy2' 00:19:35.691 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:19:35.691 { 00:19:35.691 "nqn": "nqn.2016-06.io.spdk:cnode29775", 00:19:35.691 "model_number": " S16)*vbRql>g}HE~xC&+XmSir}Lb%)ppnH;-/fy2", 00:19:35.691 "method": "nvmf_create_subsystem", 00:19:35.691 "req_id": 1 00:19:35.691 } 00:19:35.691 Got JSON-RPC error response 00:19:35.691 response: 00:19:35.691 { 00:19:35.691 "code": -32602, 00:19:35.691 "message": "Invalid MN S16)*vbRql>g}HE~xC&+XmSir}Lb%)ppnH;-/fy2" 00:19:35.691 }' 00:19:35.691 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:19:35.691 { 00:19:35.691 "nqn": "nqn.2016-06.io.spdk:cnode29775", 00:19:35.691 "model_number": " S16)*vbRql>g}HE~xC&+XmSir}Lb%)ppnH;-/fy2", 00:19:35.691 "method": "nvmf_create_subsystem", 00:19:35.691 "req_id": 1 00:19:35.691 } 00:19:35.691 Got JSON-RPC error response 00:19:35.691 response: 00:19:35.691 { 00:19:35.691 "code": -32602, 00:19:35.691 "message": "Invalid MN S16)*vbRql>g}HE~xC&+XmSir}Lb%)ppnH;-/fy2" 00:19:35.691 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:35.691 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:19:35.691 [2024-11-20 17:46:35.417884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.691 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:19:35.953 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:19:35.953 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:19:35.953 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # head -n 1 00:19:35.953 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:19:35.953 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:19:35.953 [2024-11-20 17:46:35.800551] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:19:35.953 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:19:35.953 { 00:19:35.953 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:35.953 "listen_address": { 00:19:35.953 "trtype": "tcp", 00:19:35.953 "traddr": "", 00:19:35.953 "trsvcid": "4421" 00:19:35.953 }, 00:19:35.953 "method": "nvmf_subsystem_remove_listener", 00:19:35.953 "req_id": 1 00:19:35.953 } 00:19:35.953 Got JSON-RPC error response 00:19:35.953 response: 00:19:35.953 { 00:19:35.953 "code": -32602, 00:19:35.953 "message": "Invalid parameters" 00:19:35.953 }' 00:19:35.953 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:19:35.953 { 00:19:35.953 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:35.953 "listen_address": { 00:19:35.953 "trtype": "tcp", 00:19:35.953 "traddr": "", 00:19:35.953 "trsvcid": "4421" 00:19:35.953 }, 00:19:35.953 "method": "nvmf_subsystem_remove_listener", 00:19:35.953 "req_id": 1 00:19:35.953 } 00:19:35.953 Got JSON-RPC error response 00:19:35.953 response: 00:19:35.953 { 00:19:35.953 "code": -32602, 00:19:35.953 "message": "Invalid parameters" 00:19:35.953 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:19:35.953 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5476 -i 0 00:19:36.214 [2024-11-20 17:46:35.989161] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5476: invalid cntlid range [0-65519] 00:19:36.214 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:19:36.214 { 00:19:36.214 "nqn": "nqn.2016-06.io.spdk:cnode5476", 00:19:36.214 "min_cntlid": 0, 00:19:36.214 "method": "nvmf_create_subsystem", 00:19:36.214 "req_id": 1 00:19:36.214 } 00:19:36.214 Got JSON-RPC error response 00:19:36.214 response: 00:19:36.214 { 00:19:36.214 "code": -32602, 00:19:36.214 "message": "Invalid cntlid range [0-65519]" 00:19:36.214 }' 00:19:36.214 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:19:36.214 { 00:19:36.214 "nqn": "nqn.2016-06.io.spdk:cnode5476", 00:19:36.214 "min_cntlid": 0, 00:19:36.214 "method": "nvmf_create_subsystem", 00:19:36.214 "req_id": 1 00:19:36.214 } 00:19:36.214 Got JSON-RPC error response 00:19:36.214 response: 00:19:36.214 { 00:19:36.214 "code": -32602, 00:19:36.214 "message": "Invalid cntlid range [0-65519]" 00:19:36.214 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:36.214 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29245 -i 65520 00:19:36.476 [2024-11-20 17:46:36.173715] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29245: invalid cntlid range [65520-65519] 00:19:36.476 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 
00:19:36.476 { 00:19:36.476 "nqn": "nqn.2016-06.io.spdk:cnode29245", 00:19:36.476 "min_cntlid": 65520, 00:19:36.476 "method": "nvmf_create_subsystem", 00:19:36.476 "req_id": 1 00:19:36.476 } 00:19:36.476 Got JSON-RPC error response 00:19:36.476 response: 00:19:36.476 { 00:19:36.476 "code": -32602, 00:19:36.476 "message": "Invalid cntlid range [65520-65519]" 00:19:36.476 }' 00:19:36.476 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:19:36.476 { 00:19:36.476 "nqn": "nqn.2016-06.io.spdk:cnode29245", 00:19:36.476 "min_cntlid": 65520, 00:19:36.476 "method": "nvmf_create_subsystem", 00:19:36.476 "req_id": 1 00:19:36.476 } 00:19:36.476 Got JSON-RPC error response 00:19:36.476 response: 00:19:36.476 { 00:19:36.476 "code": -32602, 00:19:36.476 "message": "Invalid cntlid range [65520-65519]" 00:19:36.476 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:36.476 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12425 -I 0 00:19:36.476 [2024-11-20 17:46:36.362321] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12425: invalid cntlid range [1-0] 00:19:36.737 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:19:36.737 { 00:19:36.737 "nqn": "nqn.2016-06.io.spdk:cnode12425", 00:19:36.737 "max_cntlid": 0, 00:19:36.737 "method": "nvmf_create_subsystem", 00:19:36.737 "req_id": 1 00:19:36.737 } 00:19:36.737 Got JSON-RPC error response 00:19:36.737 response: 00:19:36.737 { 00:19:36.737 "code": -32602, 00:19:36.737 "message": "Invalid cntlid range [1-0]" 00:19:36.737 }' 00:19:36.737 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:19:36.737 { 00:19:36.737 "nqn": "nqn.2016-06.io.spdk:cnode12425", 00:19:36.737 "max_cntlid": 0, 00:19:36.737 "method": "nvmf_create_subsystem", 00:19:36.737 "req_id": 1 00:19:36.737 } 00:19:36.737 Got JSON-RPC error response 00:19:36.737 response: 00:19:36.737 { 00:19:36.737 "code": -32602, 00:19:36.737 "message": "Invalid cntlid range [1-0]" 00:19:36.737 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:36.737 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24044 -I 65520 00:19:36.737 [2024-11-20 17:46:36.554965] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24044: invalid cntlid range [1-65520] 00:19:36.737 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:19:36.737 { 00:19:36.737 "nqn": "nqn.2016-06.io.spdk:cnode24044", 00:19:36.737 "max_cntlid": 65520, 00:19:36.737 "method": "nvmf_create_subsystem", 00:19:36.737 "req_id": 1 00:19:36.737 } 00:19:36.737 Got JSON-RPC error response 00:19:36.737 response: 00:19:36.737 { 00:19:36.737 "code": -32602, 00:19:36.737 "message": "Invalid cntlid range [1-65520]" 00:19:36.737 }' 00:19:36.737 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:19:36.737 { 00:19:36.737 "nqn": "nqn.2016-06.io.spdk:cnode24044", 00:19:36.737 "max_cntlid": 65520, 00:19:36.737 "method": "nvmf_create_subsystem", 00:19:36.737 "req_id": 1 00:19:36.737 } 00:19:36.737 Got JSON-RPC error response 00:19:36.737 response: 00:19:36.737 { 00:19:36.737 "code": -32602, 00:19:36.737 "message": 
"Invalid cntlid range [1-65520]" 00:19:36.737 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:36.737 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9489 -i 6 -I 5 00:19:36.998 [2024-11-20 17:46:36.739552] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9489: invalid cntlid range [6-5] 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:19:36.998 { 00:19:36.998 "nqn": "nqn.2016-06.io.spdk:cnode9489", 00:19:36.998 "min_cntlid": 6, 00:19:36.998 "max_cntlid": 5, 00:19:36.998 "method": "nvmf_create_subsystem", 00:19:36.998 "req_id": 1 00:19:36.998 } 00:19:36.998 Got JSON-RPC error response 00:19:36.998 response: 00:19:36.998 { 00:19:36.998 "code": -32602, 00:19:36.998 "message": "Invalid cntlid range [6-5]" 00:19:36.998 }' 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:19:36.998 { 00:19:36.998 "nqn": "nqn.2016-06.io.spdk:cnode9489", 00:19:36.998 "min_cntlid": 6, 00:19:36.998 "max_cntlid": 5, 00:19:36.998 "method": "nvmf_create_subsystem", 00:19:36.998 "req_id": 1 00:19:36.998 } 00:19:36.998 Got JSON-RPC error response 00:19:36.998 response: 00:19:36.998 { 00:19:36.998 "code": -32602, 00:19:36.998 "message": "Invalid cntlid range [6-5]" 00:19:36.998 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:19:36.998 { 00:19:36.998 "name": "foobar", 00:19:36.998 "method": "nvmf_delete_target", 00:19:36.998 "req_id": 1 00:19:36.998 } 00:19:36.998 Got JSON-RPC error response 00:19:36.998 response: 00:19:36.998 { 00:19:36.998 "code": -32602, 00:19:36.998 "message": "The specified target doesn'\''t exist, cannot delete it." 00:19:36.998 }' 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:19:36.998 { 00:19:36.998 "name": "foobar", 00:19:36.998 "method": "nvmf_delete_target", 00:19:36.998 "req_id": 1 00:19:36.998 } 00:19:36.998 Got JSON-RPC error response 00:19:36.998 response: 00:19:36.998 { 00:19:36.998 "code": -32602, 00:19:36.998 "message": "The specified target doesn't exist, cannot delete it." 
00:19:36.998 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:36.998 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:36.998 rmmod nvme_tcp 00:19:36.998 rmmod nvme_fabrics 00:19:36.998 rmmod nvme_keyring 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 2628372 ']' 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 2628372 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2628372 ']' 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2628372 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.258 17:46:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2628372 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2628372' 00:19:37.258 killing process with pid 2628372 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2628372 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2628372 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 
-- # iptables-restore 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.258 17:46:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:39.805 00:19:39.805 real 0m14.221s 00:19:39.805 user 0m21.227s 00:19:39.805 sys 0m6.698s 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:39.805 ************************************ 00:19:39.805 END TEST nvmf_invalid 00:19:39.805 ************************************ 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.805 ************************************ 00:19:39.805 START TEST nvmf_connect_stress 00:19:39.805 ************************************ 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:39.805 * Looking for test storage... 
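Annotation (not part of the captured output): the nvmf_invalid test that ends above drives the target's JSON-RPC error paths; each malformed request (out-of-range cntlid, min_cntlid greater than max_cntlid, unknown target name) must come back as error -32602 rather than crash the target. A minimal sketch of replaying the last two rejections by hand, assuming a target is already up and listening on the default /var/tmp/spdk.sock (script paths as used throughout this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # min_cntlid (-i) greater than max_cntlid (-I) -> "Invalid cntlid range [6-5]"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9489 -i 6 -I 5 \
    || echo "rejected with -32602, as the trace above expects"
  # deleting a target that was never created -> "The specified target doesn't exist"
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py \
    nvmf_delete_target --name foobar || echo "rejected with -32602"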
00:19:39.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:39.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.805 --rc genhtml_branch_coverage=1 00:19:39.805 --rc genhtml_function_coverage=1 00:19:39.805 --rc genhtml_legend=1 00:19:39.805 --rc geninfo_all_blocks=1 00:19:39.805 --rc geninfo_unexecuted_blocks=1 00:19:39.805 00:19:39.805 ' 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:39.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.805 --rc genhtml_branch_coverage=1 00:19:39.805 --rc genhtml_function_coverage=1 00:19:39.805 --rc genhtml_legend=1 00:19:39.805 --rc geninfo_all_blocks=1 00:19:39.805 --rc geninfo_unexecuted_blocks=1 00:19:39.805 00:19:39.805 ' 00:19:39.805 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:39.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.805 --rc genhtml_branch_coverage=1 00:19:39.805 --rc genhtml_function_coverage=1 00:19:39.805 --rc genhtml_legend=1 00:19:39.805 --rc geninfo_all_blocks=1 00:19:39.806 --rc geninfo_unexecuted_blocks=1 00:19:39.806 00:19:39.806 ' 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:39.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.806 --rc genhtml_branch_coverage=1 00:19:39.806 --rc genhtml_function_coverage=1 00:19:39.806 --rc genhtml_legend=1 00:19:39.806 --rc geninfo_all_blocks=1 00:19:39.806 --rc geninfo_unexecuted_blocks=1 00:19:39.806 00:19:39.806 ' 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:39.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:19:39.806 17:46:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.948 17:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:47.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # 
echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:47.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:47.948 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:47.949 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:47.949 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
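Annotation: the discovery pass above builds the usable NIC list in two steps: match each PCI function against the known Intel/Mellanox device-ID tables (both ports here are 0x8086:0x159b, an E810 function bound to the ice driver), then glob sysfs to find the kernel net device attached to that function. A sketch of that mapping, with the PCI addresses taken from this run:

  # for each whitelisted PCI function, list its net device(s) via sysfs,
  # stripping the path prefix the same way the trace's ##*/ expansion does
  for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
    done
  done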
00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:19:47.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:19:47.949 00:19:47.949 --- 10.0.0.2 ping statistics --- 00:19:47.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.949 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:19:47.949 00:19:47.949 --- 10.0.0.1 ping statistics --- 00:19:47.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.949 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:47.949 17:46:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=2633520 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 2633520 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2633520 ']' 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
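Annotation: nvmf_tcp_init above splits the two ports across a network namespace so target and initiator traffic use separate interfaces: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2/24 (target side) while cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), and nvmf_tgt is then launched inside that namespace. A sketch of re-verifying the split by hand, using the names and addresses logged in this run:

  ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0   # expect 10.0.0.2/24
  ip -4 addr show dev cvl_0_1                                 # expect 10.0.0.1/24
  ping -c 1 10.0.0.2                                          # root ns -> target side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # namespace -> initiator side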
00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.949 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:47.949 [2024-11-20 17:46:47.074296] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:47.949 [2024-11-20 17:46:47.074379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.949 [2024-11-20 17:46:47.163624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:47.949 [2024-11-20 17:46:47.211502] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.949 [2024-11-20 17:46:47.211556] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.949 [2024-11-20 17:46:47.211565] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.949 [2024-11-20 17:46:47.211573] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.949 [2024-11-20 17:46:47.211579] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.949 [2024-11-20 17:46:47.211760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.949 [2024-11-20 17:46:47.211920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.949 [2024-11-20 17:46:47.211921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:48.212 [2024-11-20 17:46:47.942883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:48.212 
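Annotation: a short aside on the core mask above. nvmf_tgt was started with -m 0xE, and 0xE is binary 1110, so bits 1, 2 and 3 are set; that is why the EAL reports three available cores and a reactor comes up on cores 1, 2 and 3 while core 0 is left unmasked. The decoding, as a few lines of shell:

  mask=0xE
  for core in {0..3}; do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # prints cores 1, 2 and 3, matching the three "Reactor started" notices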
17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:48.212 [2024-11-20 17:46:47.981878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:48.212 NULL1 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2633621 00:19:48.212 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:48.212 17:46:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:48.212 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:48.212 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:19:48.212 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.212 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:48.213 17:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.213 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:48.785 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.785 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:48.785 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.785 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.785 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:49.046 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.046 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:49.046 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.046 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.046 17:46:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:49.306 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.306 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:49.306 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.306 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.306 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:49.567 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.567 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:49.567 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.567 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.567 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:49.828 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.090 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:50.090 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.090 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.090 17:46:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:50.352 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.352 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:50.352 17:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.352 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.352 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:50.612 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.612 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:50.612 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.612 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.612 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:50.873 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.873 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:50.873 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.873 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.873 17:46:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:51.133 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.133 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:51.134 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.134 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.134 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:51.703 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.703 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:51.703 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.703 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.703 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:51.963 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.963 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:51.963 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.963 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.963 17:46:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:52.224 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.224 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:52.224 17:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.224 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.224 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:52.486 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.486 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:52.486 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.486 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.486 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:53.057 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.057 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:53.057 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.057 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.057 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.318 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:53.318 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.318 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.318 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:53.578 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.578 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:53.578 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.578 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.578 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:53.839 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.839 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:53.839 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.839 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.839 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:54.099 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.099 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:54.099 17:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.099 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.099 17:46:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:54.670 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.670 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:54.670 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.670 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.670 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:54.930 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.930 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:54.930 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.930 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.930 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:55.190 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.190 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:55.190 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.190 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.190 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:55.451 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.451 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:55.451 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.451 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.451 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:55.711 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.711 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:55.711 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.711 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.711 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:56.281 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.281 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:56.281 17:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.281 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.281 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:56.541 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.541 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:56.541 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.541 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.541 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:56.801 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.801 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:56.801 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.801 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.801 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:57.062 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.062 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:57.062 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.062 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.062 17:46:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:57.322 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.322 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:57.322 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.322 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.322 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:57.890 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.890 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:57.890 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.890 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.890 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:58.150 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.150 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:58.150 17:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:58.150 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.150 17:46:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:58.412 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.412 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:58.412 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:58.412 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.412 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:58.412 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.672 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.672 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2633621 00:19:58.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2633621) - No such process 00:19:58.672 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2633621 00:19:58.672 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:58.672 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:58.672 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:58.672 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:58.672 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:19:58.673 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:58.673 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:19:58.673 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:58.673 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:58.673 rmmod nvme_tcp 00:19:58.673 rmmod nvme_fabrics 00:19:58.673 rmmod nvme_keyring 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 2633520 ']' 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 2633520 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2633520 ']' 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2633520 00:19:58.933 17:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2633520 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2633520' 00:19:58.933 killing process with pid 2633520 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2633520 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2633520 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.933 17:46:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.482 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:01.482 00:20:01.482 real 0m21.615s 00:20:01.482 user 0m43.382s 00:20:01.482 sys 0m9.329s 00:20:01.482 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.482 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:01.482 ************************************ 00:20:01.482 END TEST nvmf_connect_stress 00:20:01.482 ************************************ 00:20:01.482 17:47:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:01.482 17:47:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:01.482 17:47:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.482 17:47:00 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:01.482 ************************************
00:20:01.482 START TEST nvmf_fused_ordering
00:20:01.482 ************************************
00:20:01.482 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:20:01.482 * Looking for test storage...
00:20:01.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:20:01.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:01.482 --rc genhtml_branch_coverage=1
00:20:01.482 --rc genhtml_function_coverage=1
00:20:01.482 --rc genhtml_legend=1
00:20:01.482 --rc geninfo_all_blocks=1
00:20:01.482 --rc geninfo_unexecuted_blocks=1
00:20:01.482
00:20:01.482 '
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:20:01.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:01.482 --rc genhtml_branch_coverage=1
00:20:01.482 --rc genhtml_function_coverage=1
00:20:01.482 --rc genhtml_legend=1
00:20:01.482 --rc geninfo_all_blocks=1
00:20:01.482 --rc geninfo_unexecuted_blocks=1
00:20:01.482
00:20:01.482 '
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:20:01.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:01.482 --rc genhtml_branch_coverage=1
00:20:01.482 --rc genhtml_function_coverage=1
00:20:01.482 --rc genhtml_legend=1
00:20:01.482 --rc geninfo_all_blocks=1
00:20:01.482 --rc geninfo_unexecuted_blocks=1
00:20:01.482
00:20:01.482 '
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:20:01.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:01.482 --rc genhtml_branch_coverage=1
00:20:01.482 --rc genhtml_function_coverage=1
00:20:01.482 --rc genhtml_legend=1
00:20:01.482 --rc geninfo_all_blocks=1
00:20:01.482 --rc geninfo_unexecuted_blocks=1
00:20:01.482
00:20:01.482 '
00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.482 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:01.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.483 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.623 17:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:09.623 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:09.623 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # 
echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:09.624 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:09.624 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:09.624 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:20:09.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:20:09.624 00:20:09.624 --- 10.0.0.2 ping statistics --- 00:20:09.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.624 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:09.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:20:09.624 00:20:09.624 --- 10.0.0.1 ping statistics --- 00:20:09.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.624 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=2639845 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 2639845 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2639845 ']' 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
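The nvmf/common.sh@250 through @291 trace above is nvmf_tcp_init building the split-namespace topology used by all of the TCP-transport tests: the target port cvl_0_0 moves into the namespace cvl_0_0_ns_spdk as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, iptables opens TCP port 4420, and one ping in each direction proves the path before any NVMe traffic flows. A minimal stand-alone sketch of the same sequence (interface and namespace names copied from this trace, root required; substitute your own ports on other machines):

# Sketch of the namespace topology built by nvmf_tcp_init above.
NS=cvl_0_0_ns_spdk   # namespace that will hold the NVMe/TCP target port
TGT_IF=cvl_0_0       # target-side interface
INI_IF=cvl_0_1       # initiator-side interface, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                         # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> initiator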
00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.624 17:47:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.624 [2024-11-20 17:47:08.655748] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:09.624 [2024-11-20 17:47:08.655817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.624 [2024-11-20 17:47:08.745805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.624 [2024-11-20 17:47:08.792549] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.624 [2024-11-20 17:47:08.792604] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.624 [2024-11-20 17:47:08.792613] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.624 [2024-11-20 17:47:08.792620] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.624 [2024-11-20 17:47:08.792626] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.624 [2024-11-20 17:47:08.792656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.624 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.624 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:20:09.624 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:09.624 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.624 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.624 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.624 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.624 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.625 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.625 [2024-11-20 17:47:09.520988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.625 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.625 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:09.625 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.625 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.885 [2024-11-20 17:47:09.545256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.885 NULL1 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:09.885 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.886 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:09.886 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.886 17:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:09.886 [2024-11-20 17:47:09.613345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
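The RPC sequence traced above (fused_ordering.sh@15 through @20) configures the target entirely over JSON-RPC: a TCP transport with 8192-byte IO units, the subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001, at most 10 namespaces), a listener on 10.0.0.2:4420 inside the target namespace, and a 1000 MiB null bdev attached as a namespace, which is why the tool below reports "Namespace ID: 1 size: 1GB". In the harness, rpc_cmd ultimately drives scripts/rpc.py, so the same setup can be reproduced by hand against a running nvmf_tgt; a sketch, with every flag copied from the trace and only the rpc.py location assumed:

# Target-side setup by hand with scripts/rpc.py (run from the SPDK repo root).
RPC="./scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO units
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                    # -a allows any host NQN
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                        # listener inside the netns
$RPC bdev_null_create NULL1 1000 512                  # 1000 MiB bdev, 512 B blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exposed as NSID 1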
00:20:09.886 [2024-11-20 17:47:09.613390] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640170 ] 00:20:10.145 Attached to nqn.2016-06.io.spdk:cnode1 00:20:10.145 Namespace ID: 1 size: 1GB 00:20:10.145 fused_ordering(0) 00:20:10.145 fused_ordering(1) 00:20:10.145 fused_ordering(2) 00:20:10.145 fused_ordering(3) 00:20:10.145 fused_ordering(4) 00:20:10.145 fused_ordering(5) 00:20:10.145 fused_ordering(6) 00:20:10.145 fused_ordering(7) 00:20:10.145 fused_ordering(8) 00:20:10.145 fused_ordering(9) 00:20:10.145 fused_ordering(10) 00:20:10.145 fused_ordering(11) 00:20:10.145 fused_ordering(12) 00:20:10.145 fused_ordering(13) 00:20:10.145 fused_ordering(14) 00:20:10.145 fused_ordering(15) 00:20:10.145 fused_ordering(16) 00:20:10.145 fused_ordering(17) 00:20:10.145 fused_ordering(18) 00:20:10.145 fused_ordering(19) 00:20:10.145 fused_ordering(20) 00:20:10.145 fused_ordering(21) 00:20:10.145 fused_ordering(22) 00:20:10.145 fused_ordering(23) 00:20:10.145 fused_ordering(24) 00:20:10.145 fused_ordering(25) 00:20:10.145 fused_ordering(26) 00:20:10.145 fused_ordering(27) 00:20:10.145 fused_ordering(28) 00:20:10.145 fused_ordering(29) 00:20:10.145 fused_ordering(30) 00:20:10.145 fused_ordering(31) 00:20:10.145 fused_ordering(32) 00:20:10.145 fused_ordering(33) 00:20:10.145 fused_ordering(34) 00:20:10.145 fused_ordering(35) 00:20:10.145 fused_ordering(36) 00:20:10.145 fused_ordering(37) 00:20:10.145 fused_ordering(38) 00:20:10.145 fused_ordering(39) 00:20:10.145 fused_ordering(40) 00:20:10.145 fused_ordering(41) 00:20:10.145 fused_ordering(42) 00:20:10.145 fused_ordering(43) 00:20:10.145 fused_ordering(44) 00:20:10.145 fused_ordering(45) 00:20:10.145 fused_ordering(46) 00:20:10.145 fused_ordering(47) 00:20:10.145 fused_ordering(48) 00:20:10.145 fused_ordering(49) 00:20:10.145 fused_ordering(50) 00:20:10.145 fused_ordering(51) 00:20:10.145 fused_ordering(52) 00:20:10.145 fused_ordering(53) 00:20:10.145 fused_ordering(54) 00:20:10.145 fused_ordering(55) 00:20:10.145 fused_ordering(56) 00:20:10.145 fused_ordering(57) 00:20:10.145 fused_ordering(58) 00:20:10.145 fused_ordering(59) 00:20:10.145 fused_ordering(60) 00:20:10.145 fused_ordering(61) 00:20:10.145 fused_ordering(62) 00:20:10.145 fused_ordering(63) 00:20:10.145 fused_ordering(64) 00:20:10.145 fused_ordering(65) 00:20:10.145 fused_ordering(66) 00:20:10.145 fused_ordering(67) 00:20:10.145 fused_ordering(68) 00:20:10.145 fused_ordering(69) 00:20:10.146 fused_ordering(70) 00:20:10.146 fused_ordering(71) 00:20:10.146 fused_ordering(72) 00:20:10.146 fused_ordering(73) 00:20:10.146 fused_ordering(74) 00:20:10.146 fused_ordering(75) 00:20:10.146 fused_ordering(76) 00:20:10.146 fused_ordering(77) 00:20:10.146 fused_ordering(78) 00:20:10.146 fused_ordering(79) 00:20:10.146 fused_ordering(80) 00:20:10.146 fused_ordering(81) 00:20:10.146 fused_ordering(82) 00:20:10.146 fused_ordering(83) 00:20:10.146 fused_ordering(84) 00:20:10.146 fused_ordering(85) 00:20:10.146 fused_ordering(86) 00:20:10.146 fused_ordering(87) 00:20:10.146 fused_ordering(88) 00:20:10.146 fused_ordering(89) 00:20:10.146 fused_ordering(90) 00:20:10.146 fused_ordering(91) 00:20:10.146 fused_ordering(92) 00:20:10.146 fused_ordering(93) 00:20:10.146 fused_ordering(94) 00:20:10.146 fused_ordering(95) 00:20:10.146 fused_ordering(96) 00:20:10.146 fused_ordering(97) 00:20:10.146 fused_ordering(98) 
00:20:10.146 fused_ordering(99) ... 00:20:12.125 fused_ordering(958) [repetitive progress output elided: fused_ordering counts up by one per entry from 99 through 958, timestamps advancing from 00:20:10.146 to 00:20:12.125]
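The counter run continues through fused_ordering(1023) just below, after which the script clears its SIGINT/SIGTERM/EXIT trap and nvmftestfini tears the fixture down, as the trace that follows shows: the nvme-tcp, nvme-fabrics, and nvme-keyring host modules are unloaded, the nvmf_tgt process (pid 2639845 in this run) is killed, the harness's SPDK_NVMF-tagged iptables rules are stripped, and the target-side network namespace and initiator-side address are removed. A condensed sketch of that teardown, using only names visible in this trace (the namespace-removal line is an assumption; _remove_spdk_ns is never expanded in the log):

  #!/usr/bin/env bash
  # Hypothetical condensation of the nvmftestfini steps traced below;
  # a sketch, not the SPDK implementation itself.
  nvmfpid=2639845                                        # pid recorded in this run
  modprobe -v -r nvme-tcp nvme-fabrics                   # unload host NVMe/TCP modules
  kill -TERM "$nvmfpid" 2>/dev/null || true              # stop the nvmf_tgt reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the harness's ACCEPT rules
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed body of _remove_spdk_ns
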
00:20:12.125 fused_ordering(959) 00:20:12.125 fused_ordering(960) 00:20:12.125 fused_ordering(961) 00:20:12.125 fused_ordering(962) 00:20:12.125 fused_ordering(963) 00:20:12.125 fused_ordering(964) 00:20:12.125 fused_ordering(965) 00:20:12.125 fused_ordering(966) 00:20:12.125 fused_ordering(967) 00:20:12.125 fused_ordering(968) 00:20:12.125 fused_ordering(969) 00:20:12.125 fused_ordering(970) 00:20:12.125 fused_ordering(971) 00:20:12.125 fused_ordering(972) 00:20:12.125 fused_ordering(973) 00:20:12.125 fused_ordering(974) 00:20:12.125 fused_ordering(975) 00:20:12.125 fused_ordering(976) 00:20:12.125 fused_ordering(977) 00:20:12.125 fused_ordering(978) 00:20:12.125 fused_ordering(979) 00:20:12.125 fused_ordering(980) 00:20:12.125 fused_ordering(981) 00:20:12.125 fused_ordering(982) 00:20:12.125 fused_ordering(983) 00:20:12.125 fused_ordering(984) 00:20:12.125 fused_ordering(985) 00:20:12.125 fused_ordering(986) 00:20:12.125 fused_ordering(987) 00:20:12.125 fused_ordering(988) 00:20:12.125 fused_ordering(989) 00:20:12.125 fused_ordering(990) 00:20:12.125 fused_ordering(991) 00:20:12.125 fused_ordering(992) 00:20:12.125 fused_ordering(993) 00:20:12.125 fused_ordering(994) 00:20:12.125 fused_ordering(995) 00:20:12.125 fused_ordering(996) 00:20:12.125 fused_ordering(997) 00:20:12.125 fused_ordering(998) 00:20:12.125 fused_ordering(999) 00:20:12.125 fused_ordering(1000) 00:20:12.125 fused_ordering(1001) 00:20:12.125 fused_ordering(1002) 00:20:12.125 fused_ordering(1003) 00:20:12.125 fused_ordering(1004) 00:20:12.125 fused_ordering(1005) 00:20:12.125 fused_ordering(1006) 00:20:12.125 fused_ordering(1007) 00:20:12.125 fused_ordering(1008) 00:20:12.125 fused_ordering(1009) 00:20:12.125 fused_ordering(1010) 00:20:12.125 fused_ordering(1011) 00:20:12.125 fused_ordering(1012) 00:20:12.125 fused_ordering(1013) 00:20:12.125 fused_ordering(1014) 00:20:12.125 fused_ordering(1015) 00:20:12.125 fused_ordering(1016) 00:20:12.125 fused_ordering(1017) 00:20:12.125 fused_ordering(1018) 00:20:12.125 fused_ordering(1019) 00:20:12.125 fused_ordering(1020) 00:20:12.125 fused_ordering(1021) 00:20:12.125 fused_ordering(1022) 00:20:12.125 fused_ordering(1023) 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.125 rmmod nvme_tcp 00:20:12.125 rmmod nvme_fabrics 00:20:12.125 rmmod nvme_keyring 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:20:12.125 17:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 2639845 ']' 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 2639845 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2639845 ']' 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2639845 00:20:12.125 17:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:20:12.125 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.125 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2639845 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2639845' 00:20:12.386 killing process with pid 2639845 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2639845 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2639845 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.386 17:47:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.931 00:20:14.931 real 0m13.342s 00:20:14.931 user 0m7.048s 00:20:14.931 sys 0m7.073s 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:14.931 ************************************ 00:20:14.931 END TEST nvmf_fused_ordering 00:20:14.931 
************************************ 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:14.931 ************************************ 00:20:14.931 START TEST nvmf_ns_masking 00:20:14.931 ************************************ 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:20:14.931 * Looking for test storage... 00:20:14.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:14.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.931 --rc genhtml_branch_coverage=1 00:20:14.931 --rc genhtml_function_coverage=1 00:20:14.931 --rc genhtml_legend=1 00:20:14.931 --rc geninfo_all_blocks=1 00:20:14.931 --rc geninfo_unexecuted_blocks=1 00:20:14.931 00:20:14.931 ' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:14.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.931 --rc genhtml_branch_coverage=1 00:20:14.931 --rc genhtml_function_coverage=1 00:20:14.931 --rc genhtml_legend=1 00:20:14.931 --rc geninfo_all_blocks=1 00:20:14.931 --rc geninfo_unexecuted_blocks=1 00:20:14.931 00:20:14.931 ' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:14.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.931 --rc genhtml_branch_coverage=1 00:20:14.931 --rc genhtml_function_coverage=1 00:20:14.931 --rc genhtml_legend=1 00:20:14.931 --rc geninfo_all_blocks=1 00:20:14.931 --rc geninfo_unexecuted_blocks=1 00:20:14.931 00:20:14.931 ' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:14.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.931 --rc genhtml_branch_coverage=1 00:20:14.931 --rc genhtml_function_coverage=1 00:20:14.931 --rc genhtml_legend=1 00:20:14.931 --rc geninfo_all_blocks=1 00:20:14.931 --rc geninfo_unexecuted_blocks=1 00:20:14.931 00:20:14.931 ' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.931 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fd1f809e-df64-4452-bf5b-08bb0cfe61f7 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=65533b6a-cb5d-48bd-8bd2-813e22de5546 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a5f6508a-2846-42f1-9130-f416ec134550 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.932 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:23.075 17:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:23.075 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:23.075 17:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:23.075 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:23.075 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.075 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:23.076 
17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:23.076 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.076 17:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:23.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:20:23.076 00:20:23.076 --- 10.0.0.2 ping statistics --- 00:20:23.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.076 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:20:23.076 17:47:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:20:23.076 00:20:23.076 --- 10.0.0.1 ping statistics --- 00:20:23.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.076 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=2644787 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 2644787 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2644787 ']' 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:23.076 [2024-11-20 17:47:22.113605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:23.076 [2024-11-20 17:47:22.113674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.076 [2024-11-20 17:47:22.203695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.076 [2024-11-20 17:47:22.250509] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.076 [2024-11-20 17:47:22.250560] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.076 [2024-11-20 17:47:22.250569] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.076 [2024-11-20 17:47:22.250576] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.076 [2024-11-20 17:47:22.250582] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.076 [2024-11-20 17:47:22.250610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.076 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:23.337 [2024-11-20 17:47:23.136641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.337 17:47:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:20:23.337 17:47:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:20:23.337 17:47:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:23.598 Malloc1 00:20:23.598 17:47:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 
64 512 -b Malloc2 00:20:23.859 Malloc2 00:20:23.859 17:47:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:24.121 17:47:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:24.121 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.381 [2024-11-20 17:47:24.170014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.381 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:20:24.381 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5f6508a-2846-42f1-9130-f416ec134550 -a 10.0.0.2 -s 4420 -i 4 00:20:24.642 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:20:24.642 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:24.642 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:24.642 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:24.642 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:26.636 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:26.636 [ 0]:0x1 00:20:26.897 17:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8d648d5b56d4903850f5490d7af69cd 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8d648d5b56d4903850f5490d7af69cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:26.897 [ 0]:0x1 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:26.897 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8d648d5b56d4903850f5490d7af69cd 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8d648d5b56d4903850f5490d7af69cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:27.158 [ 1]:0x2 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c4d7386eeb4a03baf952959c0d6ba8 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c4d7386eeb4a03baf952959c0d6ba8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:20:27.158 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:27.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:27.158 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:27.418 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:27.684 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@83 -- # connect 1 00:20:27.684 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5f6508a-2846-42f1-9130-f416ec134550 -a 10.0.0.2 -s 4420 -i 4 00:20:27.684 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:27.684 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:27.684 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:27.684 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:20:27.684 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:20:27.684 17:47:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
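
The visibility probe the trace keeps returning to (target/ns_masking.sh@43-45) can be reconstructed as a small helper; a sketch, with the body approximated from the traced commands and $ctrl_id taken from the earlier list-subsys step:

    # Succeeds only if this host can see the namespace: it must appear in
    # list-ns and identify with a non-zero NGUID.
    ns_is_visible() {
        # Prints "[ n]:0xNSID" only when the namespace is visible to us.
        nvme list-ns "/dev/$ctrl_id" | grep "$1"
        # The deciding check: a masked namespace reports an all-zero NGUID.
        nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
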
00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:30.231 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:30.232 [ 0]:0x2 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c4d7386eeb4a03baf952959c0d6ba8 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c4d7386eeb4a03baf952959c0d6ba8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:30.232 [ 0]:0x1 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8d648d5b56d4903850f5490d7af69cd 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8d648d5b56d4903850f5490d7af69cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:30.232 17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:30.232 [ 1]:0x2 00:20:30.232 
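
The NOT wrapper driving these negative checks (common/autotest_common.sh@650-677 in the trace) inverts the wrapped command's exit status, so "NOT ns_is_visible 0x1" passes exactly while namespace 1 is masked. A simplified sketch; the real helper also validates the argument via type -t and special-cases exit codes above 128:

    NOT() {
        local es=0
        "$@" || es=$?
        # Succeed only if the wrapped command failed.
        (( es != 0 ))
    }
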
17:47:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:30.232 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:30.232 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c4d7386eeb4a03baf952959c0d6ba8 00:20:30.232 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c4d7386eeb4a03baf952959c0d6ba8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:30.232 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:30.493 [ 0]:0x2 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c4d7386eeb4a03baf952959c0d6ba8 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c4d7386eeb4a03baf952959c0d6ba8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:20:30.493 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:30.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:30.754 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:30.754 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:20:30.754 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5f6508a-2846-42f1-9130-f416ec134550 -a 10.0.0.2 -s 4420 -i 4 00:20:31.014 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:31.014 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:31.014 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:31.014 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:20:31.014 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:20:31.014 17:47:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:32.927 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:32.927 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:32.927 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:32.927 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:20:32.927 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:32.927 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:32.927 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:32.927 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:20:33.199 
17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:33.199 [ 0]:0x1 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d8d648d5b56d4903850f5490d7af69cd 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d8d648d5b56d4903850f5490d7af69cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:33.199 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:33.199 [ 1]:0x2 00:20:33.199 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:33.199 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:33.199 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c4d7386eeb4a03baf952959c0d6ba8 00:20:33.199 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c4d7386eeb4a03baf952959c0d6ba8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:33.199 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:33.519 [ 0]:0x2 00:20:33.519 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c4d7386eeb4a03baf952959c0d6ba8 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c4d7386eeb4a03baf952959c0d6ba8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:33.520 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:33.835 [2024-11-20 17:47:33.524062] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:33.835 request: 00:20:33.835 { 00:20:33.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.835 "nsid": 2, 00:20:33.835 "host": "nqn.2016-06.io.spdk:host1", 00:20:33.835 "method": "nvmf_ns_remove_host", 00:20:33.835 "req_id": 1 00:20:33.835 } 00:20:33.835 Got JSON-RPC error response 00:20:33.835 response: 00:20:33.835 { 00:20:33.835 "code": -32602, 00:20:33.835 "message": "Invalid parameters" 00:20:33.835 } 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:33.835 17:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:33.835 [ 0]:0x2 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0c4d7386eeb4a03baf952959c0d6ba8 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0c4d7386eeb4a03baf952959c0d6ba8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:20:33.835 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:34.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2647036 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2647036 /var/tmp/host.sock 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2647036 ']' 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:34.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:34.098 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:34.098 [2024-11-20 17:47:33.868991] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
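
From here the test runs a second SPDK application bound to its own RPC socket, so the host-side bdev_nvme RPCs stay separate from the target that already owns /var/tmp/spdk.sock. A condensed sketch of the pattern (binary, socket path, and flags as traced; workspace paths abbreviated):

    # -r picks the RPC listen socket, -m 2 pins the app to core 1.
    build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # Host-side RPCs then select that socket explicitly, e.g.:
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
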
00:20:34.098 [2024-11-20 17:47:33.869042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2647036 ] 00:20:34.098 [2024-11-20 17:47:33.947133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.098 [2024-11-20 17:47:33.978147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.039 17:47:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.039 17:47:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:20:35.039 17:47:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:35.039 17:47:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:35.300 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fd1f809e-df64-4452-bf5b-08bb0cfe61f7 00:20:35.300 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:20:35.300 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FD1F809EDF644452BF5B08BB0CFE61F7 -i 00:20:35.560 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 65533b6a-cb5d-48bd-8bd2-813e22de5546 00:20:35.560 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:20:35.560 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 65533B6ACB5D48BD8BD2813E22DE5546 -i 00:20:35.560 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:35.821 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:20:36.082 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:36.082 17:47:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:36.343 nvme0n1 00:20:36.343 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:36.343 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:36.604 nvme1n2 00:20:36.604 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:20:36.604 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:20:36.604 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:36.604 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:20:36.604 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:20:36.865 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:20:36.865 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:20:36.865 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:20:36.865 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:20:37.127 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fd1f809e-df64-4452-bf5b-08bb0cfe61f7 == \f\d\1\f\8\0\9\e\-\d\f\6\4\-\4\4\5\2\-\b\f\5\b\-\0\8\b\b\0\c\f\e\6\1\f\7 ]] 00:20:37.127 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:20:37.127 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:20:37.127 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:20:37.127 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 65533b6a-cb5d-48bd-8bd2-813e22de5546 == \6\5\5\3\3\b\6\a\-\c\b\5\d\-\4\8\b\d\-\8\b\d\2\-\8\1\3\e\2\2\d\e\5\5\4\6 ]] 00:20:37.127 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2647036 00:20:37.127 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2647036 ']' 00:20:37.127 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2647036 00:20:37.127 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:20:37.127 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.127 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2647036 00:20:37.388 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:37.388 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:37.388 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2647036' 00:20:37.388 
killing process with pid 2647036 00:20:37.388 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2647036 00:20:37.388 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2647036 00:20:37.649 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.649 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.650 rmmod nvme_tcp 00:20:37.650 rmmod nvme_fabrics 00:20:37.650 rmmod nvme_keyring 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 2644787 ']' 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 2644787 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2644787 ']' 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2644787 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.650 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2644787 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2644787' 00:20:37.910 killing process with pid 2644787 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2644787 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2644787 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:37.910 17:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.910 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.456 00:20:40.456 real 0m25.530s 00:20:40.456 user 0m25.989s 00:20:40.456 sys 0m8.002s 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:40.456 ************************************ 00:20:40.456 END TEST nvmf_ns_masking 00:20:40.456 ************************************ 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:40.456 ************************************ 00:20:40.456 START TEST nvmf_nvme_cli 00:20:40.456 ************************************ 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:20:40.456 * Looking for test storage... 
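
Condensing the namespace-masking flow that just finished: it comes down to three RPCs, shown here with names and flags exactly as they appear in the trace (the rpc.py path is abbreviated):

    # Create the namespace masked; --no-auto-visible hides it from every host.
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # Unmask it for one host: list-ns now shows it with its real NGUID.
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Mask it again: list-ns drops it and id-ns reports an all-zero NGUID.
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
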
00:20:40.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:20:40.456 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:40.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.456 --rc genhtml_branch_coverage=1 00:20:40.456 --rc genhtml_function_coverage=1 00:20:40.456 --rc genhtml_legend=1 00:20:40.456 --rc geninfo_all_blocks=1 00:20:40.456 --rc geninfo_unexecuted_blocks=1 00:20:40.456 00:20:40.456 ' 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:40.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.456 --rc genhtml_branch_coverage=1 00:20:40.456 --rc genhtml_function_coverage=1 00:20:40.456 --rc genhtml_legend=1 00:20:40.456 --rc geninfo_all_blocks=1 00:20:40.456 --rc geninfo_unexecuted_blocks=1 00:20:40.456 00:20:40.456 ' 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:40.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.456 --rc genhtml_branch_coverage=1 00:20:40.456 --rc genhtml_function_coverage=1 00:20:40.456 --rc genhtml_legend=1 00:20:40.456 --rc geninfo_all_blocks=1 00:20:40.456 --rc geninfo_unexecuted_blocks=1 00:20:40.456 00:20:40.456 ' 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:40.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.456 --rc genhtml_branch_coverage=1 00:20:40.456 --rc genhtml_function_coverage=1 00:20:40.456 --rc genhtml_legend=1 00:20:40.456 --rc geninfo_all_blocks=1 00:20:40.456 --rc geninfo_unexecuted_blocks=1 00:20:40.456 00:20:40.456 ' 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
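
The lcov probe above walks scripts/common.sh's field-by-field version comparison (lt 1.15 2 dispatches to cmp_versions with op '<'). A reconstruction from the traced lines, hardcoding the '<' branch and omitting the decimal input validation:

    lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            # First differing field decides; missing fields count as 0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not strictly less
    }
    lt 1.15 2 && echo "old lcov, use legacy --rc options"
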
00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.456 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.457 17:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.457 17:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:48.601 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.601 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:48.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:48.602 17:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:48.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:48.602 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:48.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:20:48.602 00:20:48.602 --- 10.0.0.2 ping statistics --- 00:20:48.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.602 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:48.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:20:48.602 00:20:48.602 --- 10.0.0.1 ping statistics --- 00:20:48.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.602 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=2651976 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 2651976 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2651976 ']' 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.602 17:47:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.602 [2024-11-20 17:47:47.652992] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:20:48.602 [2024-11-20 17:47:47.653061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.602 [2024-11-20 17:47:47.740545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.602 [2024-11-20 17:47:47.790983] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.602 [2024-11-20 17:47:47.791039] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.602 [2024-11-20 17:47:47.791047] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.602 [2024-11-20 17:47:47.791055] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.602 [2024-11-20 17:47:47.791061] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.602 [2024-11-20 17:47:47.791252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.602 [2024-11-20 17:47:47.791462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.602 [2024-11-20 17:47:47.791616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.602 [2024-11-20 17:47:47.791617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.602 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.602 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:20:48.602 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:48.602 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:48.603 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 [2024-11-20 17:47:48.537165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 Malloc0 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
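The rpc_cmd calls traced here and just below drive the newly started target over its JSON-RPC socket (/var/tmp/spdk.sock by default). Outside the test wrappers, the same nvme_cli setup can be reproduced with direct scripts/rpc.py and nvme-cli invocations; a hedged sketch using only the values visible in this trace (host NQN/ID flags omitted for brevity, the harness passes --hostnqn/--hostid explicitly):

    # target side: TCP transport, two 64 MB malloc bdevs, one subsystem, data + discovery listeners
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side, as exercised further below in this log
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1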
00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 Malloc1 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 [2024-11-20 17:47:48.638890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.864 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:20:49.126 00:20:49.126 Discovery Log Number of Records 2, Generation counter 2 00:20:49.126 =====Discovery Log Entry 0====== 00:20:49.126 trtype: tcp 00:20:49.126 adrfam: ipv4 00:20:49.126 subtype: current discovery subsystem 00:20:49.126 treq: not required 00:20:49.126 portid: 0 00:20:49.126 trsvcid: 4420 00:20:49.126 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:20:49.126 traddr: 10.0.0.2 00:20:49.126 eflags: explicit discovery connections, duplicate discovery information 00:20:49.126 sectype: none 00:20:49.126 =====Discovery Log Entry 1====== 00:20:49.126 trtype: tcp 00:20:49.126 adrfam: ipv4 00:20:49.126 subtype: nvme subsystem 00:20:49.126 treq: not required 00:20:49.126 portid: 0 00:20:49.126 trsvcid: 4420 00:20:49.126 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:49.126 traddr: 10.0.0.2 00:20:49.126 eflags: none 00:20:49.126 sectype: none 00:20:49.126 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:20:49.126 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:20:49.126 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:20:49.126 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:49.126 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:20:49.126 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:20:49.126 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:49.126 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:20:49.127 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:49.127 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:20:49.127 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:50.514 17:47:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:50.514 17:47:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:20:50.514 17:47:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:50.514 17:47:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:20:50.514 17:47:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:20:50.514 17:47:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:20:53.059 17:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:20:53.059 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:20:53.060 /dev/nvme0n2 ]] 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:20:53.060 17:47:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:53.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:53.321 17:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.321 rmmod nvme_tcp 00:20:53.321 rmmod nvme_fabrics 00:20:53.321 rmmod nvme_keyring 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 2651976 ']' 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 2651976 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2651976 ']' 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2651976 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2651976 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2651976' 00:20:53.321 killing process with pid 2651976 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2651976 00:20:53.321 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2651976 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.583 17:47:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.496 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:55.496 00:20:55.496 real 0m15.513s 00:20:55.496 user 0m24.138s 00:20:55.496 sys 0m6.448s 00:20:55.496 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:55.496 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:55.496 ************************************ 00:20:55.496 END TEST nvmf_nvme_cli 00:20:55.496 ************************************ 00:20:55.757 17:47:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:20:55.757 17:47:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:55.757 17:47:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:55.757 17:47:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:55.758 ************************************ 00:20:55.758 START TEST nvmf_vfio_user 00:20:55.758 ************************************ 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:20:55.758 * Looking for test storage... 00:20:55.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:55.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.758 --rc genhtml_branch_coverage=1 00:20:55.758 --rc genhtml_function_coverage=1 00:20:55.758 --rc genhtml_legend=1 00:20:55.758 --rc geninfo_all_blocks=1 00:20:55.758 --rc geninfo_unexecuted_blocks=1 00:20:55.758 00:20:55.758 ' 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:55.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.758 --rc genhtml_branch_coverage=1 00:20:55.758 --rc genhtml_function_coverage=1 00:20:55.758 --rc genhtml_legend=1 00:20:55.758 --rc geninfo_all_blocks=1 00:20:55.758 --rc geninfo_unexecuted_blocks=1 00:20:55.758 00:20:55.758 ' 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:55.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.758 --rc genhtml_branch_coverage=1 00:20:55.758 --rc genhtml_function_coverage=1 00:20:55.758 --rc genhtml_legend=1 00:20:55.758 --rc geninfo_all_blocks=1 00:20:55.758 --rc geninfo_unexecuted_blocks=1 00:20:55.758 00:20:55.758 ' 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:55.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.758 --rc genhtml_branch_coverage=1 00:20:55.758 --rc genhtml_function_coverage=1 00:20:55.758 --rc genhtml_legend=1 00:20:55.758 --rc geninfo_all_blocks=1 00:20:55.758 --rc geninfo_unexecuted_blocks=1 00:20:55.758 00:20:55.758 ' 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.758 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.019 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
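The "[: : integer expression expected" message captured just above (and earlier during the nvme_cli run) is bash's test builtin rejecting an empty string in a numeric comparison: the traced command is '[' '' -eq 1 ']', i.e. the variable being checked was unset when common.sh line 33 ran. A small illustrative reproduction and the usual guard (SOME_FLAG is a hypothetical stand-in, not the variable common.sh actually tests):

    unset SOME_FLAG
    [ "$SOME_FLAG" -eq 1 ]          # -> bash: [: : integer expression expected
    [ "${SOME_FLAG:-0}" -eq 1 ]     # safe: unset/empty defaults to 0, test is simply false
    (( ${SOME_FLAG:-0} == 1 ))      # arithmetic alternative with the same default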
00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2653763 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2653763' 00:20:56.020 Process pid: 2653763 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2653763 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2653763 ']' 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:56.020 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:56.020 [2024-11-20 17:47:55.744213] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:56.020 [2024-11-20 17:47:55.744284] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.020 [2024-11-20 17:47:55.826004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.020 [2024-11-20 17:47:55.855639] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.020 [2024-11-20 17:47:55.855674] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:56.020 [2024-11-20 17:47:55.855680] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.020 [2024-11-20 17:47:55.855684] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.020 [2024-11-20 17:47:55.855688] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.020 [2024-11-20 17:47:55.855832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.020 [2024-11-20 17:47:55.855989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.020 [2024-11-20 17:47:55.856004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.020 [2024-11-20 17:47:55.856009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.961 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.961 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:20:56.961 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:57.902 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:20:57.902 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:57.902 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:57.902 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:57.902 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:57.902 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:58.162 Malloc1 00:20:58.162 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:58.423 17:47:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:58.423 17:47:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:58.684 17:47:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:58.684 17:47:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:58.684 17:47:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:58.945 Malloc2 00:20:58.945 17:47:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
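Consolidated from the entries above and the ones that follow, each vfio-user device is wired up with the same RPC sequence against the running nvmf_tgt. A sketch of that loop, with the rpc.py path shortened and the same sizes and NQNs the test uses:

#!/usr/bin/env bash
# One vfio-user controller per device: malloc bdev -> subsystem -> namespace -> listener.
rpc_py=./scripts/rpc.py
NUM_DEVICES=2

$rpc_py nvmf_create_transport -t VFIOUSER

for i in $(seq 1 $NUM_DEVICES); do
    traddr=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$traddr"
    $rpc_py bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$traddr" -s 0
done

Note that the listener's traddr is a directory rather than an IP address: the VFIOUSER transport places a vfio-user control socket (the cntrl file seen later in this log) under that path, which clients then reference through trtype:VFIOUSER transport ID strings.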
00:20:58.945 17:47:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:59.205 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:59.478 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:20:59.478 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:20:59.478 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:59.478 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:59.478 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:20:59.478 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:59.478 [2024-11-20 17:47:59.243526] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:59.478 [2024-11-20 17:47:59.243577] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2654441 ] 00:20:59.478 [2024-11-20 17:47:59.271265] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:20:59.478 [2024-11-20 17:47:59.279426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:59.478 [2024-11-20 17:47:59.279440] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f27623bd000 00:20:59.478 [2024-11-20 17:47:59.280429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:59.478 [2024-11-20 17:47:59.281421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:59.478 [2024-11-20 17:47:59.282432] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:59.478 [2024-11-20 17:47:59.283442] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:59.478 [2024-11-20 17:47:59.284439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:59.478 [2024-11-20 17:47:59.285449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:59.478 [2024-11-20 17:47:59.286453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:20:59.479 [2024-11-20 17:47:59.287456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:59.479 [2024-11-20 17:47:59.288455] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:59.479 [2024-11-20 17:47:59.288462] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f27610c6000 00:20:59.479 [2024-11-20 17:47:59.289377] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:59.479 [2024-11-20 17:47:59.302825] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:20:59.479 [2024-11-20 17:47:59.302844] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:20:59.479 [2024-11-20 17:47:59.305568] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:59.479 [2024-11-20 17:47:59.305602] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:59.479 [2024-11-20 17:47:59.305663] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:20:59.479 [2024-11-20 17:47:59.305676] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:20:59.479 [2024-11-20 17:47:59.305679] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:20:59.479 [2024-11-20 17:47:59.306568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:20:59.479 [2024-11-20 17:47:59.306574] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:20:59.479 [2024-11-20 17:47:59.306579] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:20:59.479 [2024-11-20 17:47:59.307569] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:59.479 [2024-11-20 17:47:59.307575] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:20:59.479 [2024-11-20 17:47:59.307581] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:20:59.479 [2024-11-20 17:47:59.308574] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:20:59.479 [2024-11-20 17:47:59.308582] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:59.479 [2024-11-20 17:47:59.309577] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:20:59.479 [2024-11-20 
17:47:59.309582] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:20:59.479 [2024-11-20 17:47:59.309586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:20:59.479 [2024-11-20 17:47:59.309591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:59.479 [2024-11-20 17:47:59.309695] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:20:59.479 [2024-11-20 17:47:59.309698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:59.479 [2024-11-20 17:47:59.309701] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:20:59.479 [2024-11-20 17:47:59.310590] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:20:59.479 [2024-11-20 17:47:59.311590] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:20:59.479 [2024-11-20 17:47:59.312599] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:59.479 [2024-11-20 17:47:59.313597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:59.479 [2024-11-20 17:47:59.313649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:59.479 [2024-11-20 17:47:59.314607] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:20:59.479 [2024-11-20 17:47:59.314612] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:59.479 [2024-11-20 17:47:59.314615] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314630] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:20:59.479 [2024-11-20 17:47:59.314636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314646] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:59.479 [2024-11-20 17:47:59.314650] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:59.479 [2024-11-20 17:47:59.314653] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:59.479 [2024-11-20 17:47:59.314662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:59.479 [2024-11-20 17:47:59.314698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:59.479 [2024-11-20 17:47:59.314705] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:20:59.479 [2024-11-20 17:47:59.314708] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:20:59.479 [2024-11-20 17:47:59.314712] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:20:59.479 [2024-11-20 17:47:59.314716] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:59.479 [2024-11-20 17:47:59.314719] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:20:59.479 [2024-11-20 17:47:59.314722] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:20:59.479 [2024-11-20 17:47:59.314725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314731] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314738] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:59.479 [2024-11-20 17:47:59.314750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:59.479 [2024-11-20 17:47:59.314758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.479 [2024-11-20 17:47:59.314764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.479 [2024-11-20 17:47:59.314771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.479 [2024-11-20 17:47:59.314777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.479 [2024-11-20 17:47:59.314780] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:59.479 [2024-11-20 17:47:59.314801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:59.479 [2024-11-20 17:47:59.314805] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:20:59.479 [2024-11-20 17:47:59.314808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314813] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:59.479 [2024-11-20 17:47:59.314839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:59.479 [2024-11-20 17:47:59.314883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314894] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:59.479 [2024-11-20 17:47:59.314898] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:59.479 [2024-11-20 17:47:59.314900] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:59.479 [2024-11-20 17:47:59.314905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:59.479 [2024-11-20 17:47:59.314916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:59.479 [2024-11-20 17:47:59.314922] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:20:59.479 [2024-11-20 17:47:59.314930] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314941] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:59.479 [2024-11-20 17:47:59.314944] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:59.479 [2024-11-20 17:47:59.314946] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:59.479 [2024-11-20 17:47:59.314950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:59.479 [2024-11-20 17:47:59.314970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:59.479 [2024-11-20 17:47:59.314979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314985] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.314989] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:59.479 [2024-11-20 17:47:59.314992] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:59.479 [2024-11-20 17:47:59.314994] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:59.479 [2024-11-20 17:47:59.314999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:59.479 [2024-11-20 17:47:59.315010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:59.479 [2024-11-20 17:47:59.315015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.315020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.315025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.315030] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.315033] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.315037] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.315040] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:20:59.479 [2024-11-20 17:47:59.315045] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:20:59.479 [2024-11-20 17:47:59.315048] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:20:59.479 [2024-11-20 17:47:59.315062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:59.480 [2024-11-20 17:47:59.315069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:59.480 [2024-11-20 17:47:59.315077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:59.480 [2024-11-20 17:47:59.315087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:59.480 [2024-11-20 17:47:59.315095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:59.480 [2024-11-20 17:47:59.315107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:59.480 [2024-11-20 17:47:59.315115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:59.480 [2024-11-20 17:47:59.315124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:59.480 [2024-11-20 17:47:59.315133] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:59.480 [2024-11-20 17:47:59.315136] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:59.480 [2024-11-20 17:47:59.315139] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:59.480 [2024-11-20 17:47:59.315141] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:59.480 [2024-11-20 17:47:59.315143] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:59.480 [2024-11-20 17:47:59.315148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:59.480 [2024-11-20 17:47:59.315153] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:59.480 [2024-11-20 17:47:59.315156] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:59.480 [2024-11-20 17:47:59.315162] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:59.480 [2024-11-20 17:47:59.315166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:59.480 [2024-11-20 17:47:59.315171] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:59.480 [2024-11-20 17:47:59.315174] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:59.480 [2024-11-20 17:47:59.315177] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:59.480 [2024-11-20 17:47:59.315181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:59.480 [2024-11-20 17:47:59.315186] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:59.480 [2024-11-20 17:47:59.315189] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:59.480 [2024-11-20 17:47:59.315191] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:59.480 [2024-11-20 17:47:59.315195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:59.480 [2024-11-20 17:47:59.315201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:59.480 [2024-11-20 17:47:59.315210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:59.480 [2024-11-20 17:47:59.315217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:59.480 [2024-11-20 17:47:59.315222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:59.480 ===================================================== 00:20:59.480 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:59.480 ===================================================== 00:20:59.480 Controller Capabilities/Features 00:20:59.480 ================================ 00:20:59.480 Vendor ID: 4e58 00:20:59.480 Subsystem Vendor ID: 4e58 00:20:59.480 Serial Number: SPDK1 00:20:59.480 Model Number: SPDK bdev Controller 00:20:59.480 Firmware Version: 24.09.1 00:20:59.480 Recommended Arb Burst: 6 00:20:59.480 IEEE OUI Identifier: 8d 6b 50 00:20:59.480 Multi-path I/O 00:20:59.480 May have multiple subsystem ports: Yes 00:20:59.480 May have multiple controllers: Yes 00:20:59.480 Associated with SR-IOV VF: No 00:20:59.480 Max Data Transfer Size: 131072 00:20:59.480 Max Number of Namespaces: 32 00:20:59.480 Max Number of I/O Queues: 127 00:20:59.480 NVMe Specification Version (VS): 1.3 00:20:59.480 NVMe Specification Version (Identify): 1.3 00:20:59.480 Maximum Queue Entries: 256 00:20:59.480 Contiguous Queues Required: Yes 00:20:59.480 Arbitration Mechanisms Supported 00:20:59.480 Weighted Round Robin: Not Supported 00:20:59.480 Vendor Specific: Not Supported 00:20:59.480 Reset Timeout: 15000 ms 00:20:59.480 Doorbell Stride: 4 bytes 00:20:59.480 NVM Subsystem Reset: Not Supported 00:20:59.480 Command Sets Supported 00:20:59.480 NVM Command Set: Supported 00:20:59.480 Boot Partition: Not Supported 00:20:59.480 Memory Page Size Minimum: 4096 bytes 00:20:59.480 Memory Page Size Maximum: 4096 bytes 00:20:59.480 Persistent Memory Region: Not Supported 00:20:59.480 Optional Asynchronous Events Supported 00:20:59.480 Namespace Attribute Notices: Supported 00:20:59.480 Firmware Activation Notices: Not Supported 00:20:59.480 ANA Change Notices: Not Supported 00:20:59.480 PLE Aggregate Log Change Notices: Not Supported 00:20:59.480 LBA Status Info Alert Notices: Not Supported 00:20:59.480 EGE Aggregate Log Change Notices: Not Supported 00:20:59.480 Normal NVM Subsystem Shutdown event: Not Supported 00:20:59.480 Zone Descriptor Change Notices: Not Supported 00:20:59.480 Discovery Log Change Notices: Not Supported 00:20:59.480 Controller Attributes 00:20:59.480 128-bit Host Identifier: Supported 00:20:59.480 Non-Operational Permissive Mode: Not Supported 00:20:59.480 NVM Sets: Not Supported 00:20:59.480 Read Recovery Levels: Not Supported 00:20:59.480 Endurance Groups: Not Supported 00:20:59.480 Predictable Latency Mode: Not Supported 00:20:59.480 Traffic Based Keep ALive: Not Supported 00:20:59.480 Namespace Granularity: Not Supported 00:20:59.480 SQ Associations: Not Supported 00:20:59.480 UUID List: Not Supported 00:20:59.480 Multi-Domain Subsystem: Not Supported 00:20:59.480 Fixed Capacity Management: Not Supported 00:20:59.480 Variable Capacity Management: Not Supported 00:20:59.480 Delete Endurance Group: Not Supported 00:20:59.480 Delete NVM Set: Not Supported 00:20:59.480 Extended LBA Formats Supported: Not Supported 00:20:59.480 Flexible Data Placement Supported: Not Supported 00:20:59.480 00:20:59.480 Controller Memory Buffer Support 00:20:59.480 ================================ 00:20:59.480 Supported: No 00:20:59.480 00:20:59.480 Persistent Memory Region Support 
00:20:59.480 ================================ 00:20:59.480 Supported: No 00:20:59.480 00:20:59.480 Admin Command Set Attributes 00:20:59.480 ============================ 00:20:59.480 Security Send/Receive: Not Supported 00:20:59.480 Format NVM: Not Supported 00:20:59.480 Firmware Activate/Download: Not Supported 00:20:59.480 Namespace Management: Not Supported 00:20:59.480 Device Self-Test: Not Supported 00:20:59.480 Directives: Not Supported 00:20:59.480 NVMe-MI: Not Supported 00:20:59.480 Virtualization Management: Not Supported 00:20:59.480 Doorbell Buffer Config: Not Supported 00:20:59.480 Get LBA Status Capability: Not Supported 00:20:59.480 Command & Feature Lockdown Capability: Not Supported 00:20:59.480 Abort Command Limit: 4 00:20:59.480 Async Event Request Limit: 4 00:20:59.480 Number of Firmware Slots: N/A 00:20:59.480 Firmware Slot 1 Read-Only: N/A 00:20:59.480 Firmware Activation Without Reset: N/A 00:20:59.480 Multiple Update Detection Support: N/A 00:20:59.480 Firmware Update Granularity: No Information Provided 00:20:59.480 Per-Namespace SMART Log: No 00:20:59.480 Asymmetric Namespace Access Log Page: Not Supported 00:20:59.480 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:20:59.480 Command Effects Log Page: Supported 00:20:59.480 Get Log Page Extended Data: Supported 00:20:59.480 Telemetry Log Pages: Not Supported 00:20:59.480 Persistent Event Log Pages: Not Supported 00:20:59.480 Supported Log Pages Log Page: May Support 00:20:59.480 Commands Supported & Effects Log Page: Not Supported 00:20:59.480 Feature Identifiers & Effects Log Page:May Support 00:20:59.480 NVMe-MI Commands & Effects Log Page: May Support 00:20:59.480 Data Area 4 for Telemetry Log: Not Supported 00:20:59.480 Error Log Page Entries Supported: 128 00:20:59.480 Keep Alive: Supported 00:20:59.480 Keep Alive Granularity: 10000 ms 00:20:59.480 00:20:59.480 NVM Command Set Attributes 00:20:59.480 ========================== 00:20:59.480 Submission Queue Entry Size 00:20:59.480 Max: 64 00:20:59.480 Min: 64 00:20:59.480 Completion Queue Entry Size 00:20:59.480 Max: 16 00:20:59.480 Min: 16 00:20:59.480 Number of Namespaces: 32 00:20:59.480 Compare Command: Supported 00:20:59.480 Write Uncorrectable Command: Not Supported 00:20:59.481 Dataset Management Command: Supported 00:20:59.481 Write Zeroes Command: Supported 00:20:59.481 Set Features Save Field: Not Supported 00:20:59.481 Reservations: Not Supported 00:20:59.481 Timestamp: Not Supported 00:20:59.481 Copy: Supported 00:20:59.481 Volatile Write Cache: Present 00:20:59.481 Atomic Write Unit (Normal): 1 00:20:59.481 Atomic Write Unit (PFail): 1 00:20:59.481 Atomic Compare & Write Unit: 1 00:20:59.481 Fused Compare & Write: Supported 00:20:59.481 Scatter-Gather List 00:20:59.481 SGL Command Set: Supported (Dword aligned) 00:20:59.481 SGL Keyed: Not Supported 00:20:59.481 SGL Bit Bucket Descriptor: Not Supported 00:20:59.481 SGL Metadata Pointer: Not Supported 00:20:59.481 Oversized SGL: Not Supported 00:20:59.481 SGL Metadata Address: Not Supported 00:20:59.481 SGL Offset: Not Supported 00:20:59.481 Transport SGL Data Block: Not Supported 00:20:59.481 Replay Protected Memory Block: Not Supported 00:20:59.481 00:20:59.481 Firmware Slot Information 00:20:59.481 ========================= 00:20:59.481 Active slot: 1 00:20:59.481 Slot 1 Firmware Revision: 24.09.1 00:20:59.481 00:20:59.481 00:20:59.481 Commands Supported and Effects 00:20:59.481 ============================== 00:20:59.481 Admin Commands 00:20:59.481 -------------- 00:20:59.481 Get Log Page (02h): 
Supported 00:20:59.481 Identify (06h): Supported 00:20:59.481 Abort (08h): Supported 00:20:59.481 Set Features (09h): Supported 00:20:59.481 Get Features (0Ah): Supported 00:20:59.481 Asynchronous Event Request (0Ch): Supported 00:20:59.481 Keep Alive (18h): Supported 00:20:59.481 I/O Commands 00:20:59.481 ------------ 00:20:59.481 Flush (00h): Supported LBA-Change 00:20:59.481 Write (01h): Supported LBA-Change 00:20:59.481 Read (02h): Supported 00:20:59.481 Compare (05h): Supported 00:20:59.481 Write Zeroes (08h): Supported LBA-Change 00:20:59.481 Dataset Management (09h): Supported LBA-Change 00:20:59.481 Copy (19h): Supported LBA-Change 00:20:59.481 00:20:59.481 Error Log 00:20:59.481 ========= 00:20:59.481 00:20:59.481 Arbitration 00:20:59.481 =========== 00:20:59.481 Arbitration Burst: 1 00:20:59.481 00:20:59.481 Power Management 00:20:59.481 ================ 00:20:59.481 Number of Power States: 1 00:20:59.481 Current Power State: Power State #0 00:20:59.481 Power State #0: 00:20:59.481 Max Power: 0.00 W 00:20:59.481 Non-Operational State: Operational 00:20:59.481 Entry Latency: Not Reported 00:20:59.481 Exit Latency: Not Reported 00:20:59.481 Relative Read Throughput: 0 00:20:59.481 Relative Read Latency: 0 00:20:59.481 Relative Write Throughput: 0 00:20:59.481 Relative Write Latency: 0 00:20:59.481 Idle Power: Not Reported 00:20:59.481 Active Power: Not Reported 00:20:59.481 Non-Operational Permissive Mode: Not Supported 00:20:59.481 00:20:59.481 Health Information 00:20:59.481 ================== 00:20:59.481 Critical Warnings: 00:20:59.481 Available Spare Space: OK 00:20:59.481 Temperature: OK 00:20:59.481 Device Reliability: OK 00:20:59.481 Read Only: No 00:20:59.481 Volatile Memory Backup: OK 00:20:59.481 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:59.481 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:59.481 Available Spare: 0% 00:20:59.481 Available Spare Threshold: 0% 00:20:59.481 Life Percentage Used: 0% 00:20:59.481 Data Units Read: 0 00:20:59.481 Data Units Written: 0 00:20:59.481 Host Read Commands: 0 00:20:59.481 Host Write Commands: 0 00:20:59.481 Controller Busy Time: 0 minutes 00:20:59.481 Power Cycles: 0 00:20:59.481 Power On Hours: 0 hours 00:20:59.481 Unsafe Shutdowns: 0 00:20:59.481 Unrecoverable Media Errors: 0 00:20:59.481 Lifetime Error Log Entries: 0 00:20:59.481 Warning Temperature Time: 0 minutes 00:20:59.481 Critical Temperature Time: 0 minutes 00:20:59.481 00:20:59.481 Number of Queues 00:20:59.481 ================ 00:20:59.481 Number of I/O Submission Queues: 127 00:20:59.481 Number of I/O Completion Queues: 127 00:20:59.481 00:20:59.481 Active Namespaces 00:20:59.481 ================= 00:20:59.481 Namespace ID:1 00:20:59.481 Error Recovery Timeout: Unlimited 00:20:59.481 Command Set Identifier: NVM (00h) 00:20:59.481 Deallocate: Supported 00:20:59.481 Deallocated/Unwritten Error: Not Supported 00:20:59.481 Deallocated Read Value: Unknown 00:20:59.481 Deallocate in Write Zeroes: Not Supported 00:20:59.481 Deallocated Guard Field: 0xFFFF 00:20:59.481 Flush: Supported 00:20:59.481 Reservation: Supported 00:20:59.481 Namespace Sharing Capabilities: Multiple Controllers 00:20:59.481 Size (in LBAs): 131072 (0GiB) 00:20:59.481 Capacity (in LBAs): 131072 (0GiB) 00:20:59.481 Utilization (in LBAs): 131072 (0GiB) 00:20:59.481 NGUID: 0DC2E3DD4CB64838AA4700100F6B9E03 00:20:59.481 UUID: 0dc2e3dd-4cb6-4838-aa47-00100f6b9e03 00:20:59.481 Thin Provisioning: Not Supported 00:20:59.481 Per-NS Atomic Units: Yes 00:20:59.481 Atomic Boundary Size (Normal): 0 00:20:59.481 Atomic Boundary Size (PFail): 0 00:20:59.482 Atomic Boundary Offset: 0 00:20:59.482 Maximum Single Source Range Length: 65535 00:20:59.482 Maximum Copy Length: 65535 00:20:59.482 Maximum Source Range Count: 1 00:20:59.482 NGUID/EUI64 Never Reused: No 00:20:59.482 Namespace Write Protected: No 00:20:59.482 Number of LBA Formats: 1 00:20:59.482 Current LBA Format: LBA Format #00 00:20:59.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:59.482 00:20:59.482
[2024-11-20 17:47:59.315295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:59.481 [2024-11-20 17:47:59.315304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:59.481 [2024-11-20 17:47:59.315324] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:20:59.481 [2024-11-20 17:47:59.315330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.481 [2024-11-20 17:47:59.315335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.481 [2024-11-20 17:47:59.315339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.481 [2024-11-20 17:47:59.315344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.481 [2024-11-20 17:47:59.315613] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:59.481 [2024-11-20 17:47:59.315621] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:20:59.481 [2024-11-20 17:47:59.316618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:59.481 [2024-11-20 17:47:59.316655] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:20:59.481 [2024-11-20 17:47:59.316660] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:20:59.481 [2024-11-20 17:47:59.321162] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:20:59.481 [2024-11-20 17:47:59.321170] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 4 milliseconds 00:20:59.481 [2024-11-20 17:47:59.321231] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:20:59.481 [2024-11-20 17:47:59.322669] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:59.481
17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:59.742 [2024-11-20 17:47:59.488766] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:05.030 Initializing NVMe Controllers 00:21:05.030 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:05.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:21:05.030 Initialization complete. Launching workers. 00:21:05.030 ======================================================== 00:21:05.030 Latency(us) 00:21:05.030 Device Information : IOPS MiB/s Average min max 00:21:05.030 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40024.20 156.34 3200.64 844.00 10778.21 00:21:05.030 ======================================================== 00:21:05.030 Total : 40024.20 156.34 3200.64 844.00 10778.21 00:21:05.030 00:21:05.030 [2024-11-20 17:48:04.512768] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:05.030 17:48:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:21:05.030 [2024-11-20 17:48:04.685570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:10.318 Initializing NVMe Controllers 00:21:10.318 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:10.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:21:10.318 Initialization complete. Launching workers. 00:21:10.318 ======================================================== 00:21:10.318 Latency(us) 00:21:10.318 Device Information : IOPS MiB/s Average min max 00:21:10.318 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.67 7625.34 8024.71 00:21:10.318 ======================================================== 00:21:10.318 Total : 16051.20 62.70 7980.67 7625.34 8024.71 00:21:10.318 00:21:10.318 [2024-11-20 17:48:09.719890] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:10.318 17:48:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:21:10.318 [2024-11-20 17:48:09.908688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:15.603 [2024-11-20 17:48:15.001416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:15.603 Initializing NVMe Controllers 00:21:15.603 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:15.603 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:15.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:21:15.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:21:15.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:21:15.603 Initialization complete. Launching workers. 
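For reference, the spdk_nvme_perf invocation used for the read and write runs above, unpacked with one flag per line; the glosses are the usual readings of these options and are worth checking against the tool's --help output:

#!/usr/bin/env bash
traddr=/var/run/vfio-user/domain/vfio-user1/1
subnqn=nqn.2019-07.io.spdk:cnode1

args=(
    -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn"   # transport ID: type, socket directory, subsystem NQN
    -s 256       # hugepage memory to reserve, in MB
    -g           # single hugetlbfs segment (the --single-file-segments seen in the EAL parameters)
    -q 128       # queue depth
    -o 4096      # I/O size in bytes
    -w read      # workload; the second run used -w write, the reconnect run -w randrw -M 50
    -t 5         # run time in seconds
    -c 0x2       # core mask
)
./build/bin/spdk_nvme_perf "${args[@]}"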
00:21:15.603 Starting thread on core 2 00:21:15.603 Starting thread on core 3 00:21:15.603 Starting thread on core 1 00:21:15.603 17:48:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:21:15.603 [2024-11-20 17:48:15.232522] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:18.899 [2024-11-20 17:48:18.293399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:18.899 Initializing NVMe Controllers 00:21:18.899 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:18.899 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:18.899 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:21:18.899 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:21:18.899 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:21:18.899 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:21:18.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:21:18.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:21:18.899 Initialization complete. Launching workers. 00:21:18.899 Starting thread on core 1 with urgent priority queue 00:21:18.899 Starting thread on core 2 with urgent priority queue 00:21:18.899 Starting thread on core 3 with urgent priority queue 00:21:18.899 Starting thread on core 0 with urgent priority queue 00:21:18.899 SPDK bdev Controller (SPDK1 ) core 0: 10722.67 IO/s 9.33 secs/100000 ios 00:21:18.899 SPDK bdev Controller (SPDK1 ) core 1: 15078.00 IO/s 6.63 secs/100000 ios 00:21:18.899 SPDK bdev Controller (SPDK1 ) core 2: 8561.67 IO/s 11.68 secs/100000 ios 00:21:18.899 SPDK bdev Controller (SPDK1 ) core 3: 17472.00 IO/s 5.72 secs/100000 ios 00:21:18.899 ======================================================== 00:21:18.899 00:21:18.899 17:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:21:18.899 [2024-11-20 17:48:18.516577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:18.899 Initializing NVMe Controllers 00:21:18.899 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:18.899 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:18.899 Namespace ID: 1 size: 0GB 00:21:18.899 Initialization complete. 00:21:18.899 INFO: using host memory buffer for IO 00:21:18.899 Hello world! 
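In the arbitration summary above, the two columns per core are reciprocal views of the same measurement: each thread issues 100000 I/Os (the -n 100000 in the printed configuration), so secs/100000 ios is just 100000 divided by the IO/s figure. A one-line check for core 0:

echo 'scale=4; 100000 / 10722.67' | bc    # -> 9.3260, which the tool reports rounded as 9.33 secs/100000 ios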
00:21:18.899 [2024-11-20 17:48:18.549783] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:18.900 17:48:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:21:18.900 [2024-11-20 17:48:18.772589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:20.283 Initializing NVMe Controllers 00:21:20.283 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:20.283 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:20.283 Initialization complete. Launching workers. 00:21:20.283 submit (in ns) avg, min, max = 6241.3, 2835.8, 3998360.0 00:21:20.283 complete (in ns) avg, min, max = 16413.0, 1620.0, 5991613.3 00:21:20.283 00:21:20.283 Submit histogram 00:21:20.283 ================ 00:21:20.283 Range in us Cumulative Count 00:21:20.283 2.827 - 2.840: 0.0577% ( 12) 00:21:20.283 2.840 - 2.853: 1.1690% ( 231) 00:21:20.283 2.853 - 2.867: 4.2430% ( 639) 00:21:20.283 2.867 - 2.880: 8.6641% ( 919) 00:21:20.283 2.880 - 2.893: 14.4513% ( 1203) 00:21:20.283 2.893 - 2.907: 20.1328% ( 1181) 00:21:20.283 2.907 - 2.920: 26.9255% ( 1412) 00:21:20.283 2.920 - 2.933: 32.7801% ( 1217) 00:21:20.283 2.933 - 2.947: 38.2451% ( 1136) 00:21:20.283 2.947 - 2.960: 43.6282% ( 1119) 00:21:20.283 2.960 - 2.973: 49.6031% ( 1242) 00:21:20.283 2.973 - 2.987: 55.4625% ( 1218) 00:21:20.283 2.987 - 3.000: 64.5788% ( 1895) 00:21:20.283 3.000 - 3.013: 73.1322% ( 1778) 00:21:20.283 3.013 - 3.027: 80.1414% ( 1457) 00:21:20.283 3.027 - 3.040: 86.8524% ( 1395) 00:21:20.283 3.040 - 3.053: 93.1977% ( 1319) 00:21:20.283 3.053 - 3.067: 96.8682% ( 763) 00:21:20.283 3.067 - 3.080: 98.6290% ( 366) 00:21:20.283 3.080 - 3.093: 99.2976% ( 139) 00:21:20.283 3.093 - 3.107: 99.5430% ( 51) 00:21:20.283 3.107 - 3.120: 99.5863% ( 9) 00:21:20.283 3.120 - 3.133: 99.6055% ( 4) 00:21:20.283 3.133 - 3.147: 99.6151% ( 2) 00:21:20.283 3.147 - 3.160: 99.6200% ( 1) 00:21:20.283 3.187 - 3.200: 99.6344% ( 3) 00:21:20.283 3.307 - 3.320: 99.6392% ( 1) 00:21:20.283 3.467 - 3.493: 99.6440% ( 1) 00:21:20.283 3.707 - 3.733: 99.6488% ( 1) 00:21:20.283 3.813 - 3.840: 99.6536% ( 1) 00:21:20.283 3.973 - 4.000: 99.6584% ( 1) 00:21:20.283 4.213 - 4.240: 99.6633% ( 1) 00:21:20.283 4.347 - 4.373: 99.6681% ( 1) 00:21:20.283 4.507 - 4.533: 99.6729% ( 1) 00:21:20.283 4.533 - 4.560: 99.6777% ( 1) 00:21:20.283 4.720 - 4.747: 99.6825% ( 1) 00:21:20.283 4.800 - 4.827: 99.6873% ( 1) 00:21:20.283 4.827 - 4.853: 99.6921% ( 1) 00:21:20.283 4.853 - 4.880: 99.6969% ( 1) 00:21:20.283 4.907 - 4.933: 99.7065% ( 2) 00:21:20.283 4.933 - 4.960: 99.7114% ( 1) 00:21:20.283 4.960 - 4.987: 99.7162% ( 1) 00:21:20.283 4.987 - 5.013: 99.7210% ( 1) 00:21:20.283 5.067 - 5.093: 99.7306% ( 2) 00:21:20.283 5.093 - 5.120: 99.7354% ( 1) 00:21:20.283 5.120 - 5.147: 99.7450% ( 2) 00:21:20.283 5.147 - 5.173: 99.7547% ( 2) 00:21:20.283 5.200 - 5.227: 99.7595% ( 1) 00:21:20.283 5.253 - 5.280: 99.7643% ( 1) 00:21:20.283 5.307 - 5.333: 99.7691% ( 1) 00:21:20.283 5.360 - 5.387: 99.7739% ( 1) 00:21:20.283 5.467 - 5.493: 99.7787% ( 1) 00:21:20.283 5.520 - 5.547: 99.7883% ( 2) 00:21:20.283 5.600 - 5.627: 99.7980% ( 2) 00:21:20.283 5.627 - 5.653: 99.8028% ( 1) 00:21:20.283 5.653 - 5.680: 99.8124% ( 2) 00:21:20.283 5.680 - 5.707: 99.8172% ( 1) 00:21:20.283 5.707 - 5.733: 
99.8220% ( 1) 00:21:20.283 5.840 - 5.867: 99.8268% ( 1) 00:21:20.283 5.920 - 5.947: 99.8316% ( 1) 00:21:20.283 5.947 - 5.973: 99.8364% ( 1) 00:21:20.283 6.000 - 6.027: 99.8412% ( 1) 00:21:20.283 6.027 - 6.053: 99.8461% ( 1) 00:21:20.283 6.053 - 6.080: 99.8509% ( 1) 00:21:20.283 6.133 - 6.160: 99.8605% ( 2) 00:21:20.283 6.160 - 6.187: 99.8653% ( 1) 00:21:20.283 6.187 - 6.213: 99.8701% ( 1) 00:21:20.283 6.213 - 6.240: 99.8797% ( 2) 00:21:20.283 6.240 - 6.267: 99.8942% ( 3) 00:21:20.283 6.347 - 6.373: 99.8990% ( 1) 00:21:20.283 6.880 - 6.933: 99.9038% ( 1) 00:21:20.283 7.200 - 7.253: 99.9086% ( 1) 00:21:20.283 9.600 - 9.653: 99.9134% ( 1) 00:21:20.283 10.187 - 10.240: 99.9182% ( 1) 00:21:20.283 3986.773 - 4014.080: 100.0000% ( 17) 00:21:20.283 00:21:20.283 Complete histogram 00:21:20.283 ================== 00:21:20.283 Range in us Cumulative Count 00:21:20.283 1.620 - 1.627: 0.0096% ( 2) 00:21:20.283 1.627 - 1.633: 0.0144% ( 1) 00:21:20.283 1.633 - 1.640: 0.1972% ( 38) 00:21:20.283 1.640 - 1.647: 0.8034% ( 126) 00:21:20.283 1.647 - 1.653: 0.8371% ( 7) 00:21:20.283 1.653 - 1.660: 0.8900% ( 11) 00:21:20.283 1.660 - 1.667: 0.9670% ( 16) 00:21:20.283 1.667 - 1.673: 0.9958% ( 6) 00:21:20.283 1.673 - 1.680: 1.0343% ( 8) 00:21:20.283 1.680 - 1.687: 7.8655% ( 1420) 00:21:20.283 1.687 - 1.693: 45.5669% ( 7837) 00:21:20.283 1.693 - 1.700: 53.0524% ( 1556) 00:21:20.283 1.700 - 1.707: 63.9919% ( 2274) 00:21:20.283 1.707 - 1.720: 79.0109% ( 3122) 00:21:20.283 1.720 - 1.733: 83.1193% ( 854) 00:21:20.283 1.733 - 1.747: 84.1824% ( 221) 00:21:20.283 1.747 - 1.760: 89.2337% ( 1050) 00:21:20.283 1.760 - 1.773: 95.0257% ( 1204) 00:21:20.283 1.773 - 1.787: 98.1046% ( 640) 00:21:20.283 1.787 - 1.800: 99.1148% ( 210) 00:21:20.283 1.800 - 1.813: 99.4179% ( 63) 00:21:20.283 1.813 - 1.827: 99.4660% ( 10) 00:21:20.283 1.853 - 1.867: 99.4708% ( 1) 00:21:20.283 3.680 - 3.707: 99.4756% ( 1) 00:21:20.283 3.867 - 3.893: 99.4804% ( 1) 00:21:20.283 3.973 - 4.000: 99.4901% ( 2) 00:21:20.283 4.080 - 4.107: 99.4949% ( 1) 00:21:20.283 4.133 - 4.160: 99.4997% ( 1) 00:21:20.283 4.160 - 4.187: 99.5045% ( 1) 00:21:20.283 4.213 - 4.240: 99.5093% ( 1) 00:21:20.283 4.267 - 4.293: 99.5141% ( 1) 00:21:20.283 4.293 - 4.320: 99.5237% ( 2) 00:21:20.283 4.320 - 4.347: 99.5286% ( 1) 00:21:20.283 4.373 - 4.400: 99.5430% ( 3) 00:21:20.283 4.453 - 4.480: 99.5526% ( 2) 00:21:20.283 4.480 - 4.507: 99.5574% ( 1) 00:21:20.283 4.507 - 4.533: 99.5622% ( 1) 00:21:20.283 4.587 - 4.613: 99.5670% ( 1) 00:21:20.283 4.693 - 4.720: 99.5767% ( 2) 00:21:20.283 4.747 - 4.773: 99.5815% ( 1) 00:21:20.283 4.800 - 4.827: 99.5911% ( 2) 00:21:20.283 4.827 - 4.853: 99.5959% ( 1) 00:21:20.283 4.853 - 4.880: 99.6007% ( 1) 00:21:20.283 4.933 - 4.960: 99.6055% ( 1) 00:21:20.283 5.440 - 5.467: 99.6103% ( 1) 00:21:20.283 5.520 - 5.547: 99.6151% ( 1) 00:21:20.283 5.547 - 5.573: 99.6248% ( 2) 00:21:20.283 10.613 - 10.667: 99.6296% ( 1) 00:21:20.283 12.907 - 12.960: 99.6344% ( 1) 00:21:20.283 3986.773 - 4014.080: 99.9952% ( 75) 00:21:20.283 5980.160 - 6007.467: 100.0000% ( 1) 00:21:20.283 00:21:20.283 [2024-11-20 17:48:19.791220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:20.283 17:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:21:20.283 17:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:21:20.283
17:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:21:20.283 17:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:21:20.283 17:48:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:20.283 [ 00:21:20.283 { 00:21:20.283 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:20.283 "subtype": "Discovery", 00:21:20.283 "listen_addresses": [], 00:21:20.283 "allow_any_host": true, 00:21:20.283 "hosts": [] 00:21:20.283 }, 00:21:20.283 { 00:21:20.283 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:20.283 "subtype": "NVMe", 00:21:20.283 "listen_addresses": [ 00:21:20.283 { 00:21:20.283 "trtype": "VFIOUSER", 00:21:20.283 "adrfam": "IPv4", 00:21:20.283 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:20.283 "trsvcid": "0" 00:21:20.283 } 00:21:20.283 ], 00:21:20.283 "allow_any_host": true, 00:21:20.283 "hosts": [], 00:21:20.283 "serial_number": "SPDK1", 00:21:20.283 "model_number": "SPDK bdev Controller", 00:21:20.283 "max_namespaces": 32, 00:21:20.283 "min_cntlid": 1, 00:21:20.284 "max_cntlid": 65519, 00:21:20.284 "namespaces": [ 00:21:20.284 { 00:21:20.284 "nsid": 1, 00:21:20.284 "bdev_name": "Malloc1", 00:21:20.284 "name": "Malloc1", 00:21:20.284 "nguid": "0DC2E3DD4CB64838AA4700100F6B9E03", 00:21:20.284 "uuid": "0dc2e3dd-4cb6-4838-aa47-00100f6b9e03" 00:21:20.284 } 00:21:20.284 ] 00:21:20.284 }, 00:21:20.284 { 00:21:20.284 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:20.284 "subtype": "NVMe", 00:21:20.284 "listen_addresses": [ 00:21:20.284 { 00:21:20.284 "trtype": "VFIOUSER", 00:21:20.284 "adrfam": "IPv4", 00:21:20.284 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:20.284 "trsvcid": "0" 00:21:20.284 } 00:21:20.284 ], 00:21:20.284 "allow_any_host": true, 00:21:20.284 "hosts": [], 00:21:20.284 "serial_number": "SPDK2", 00:21:20.284 "model_number": "SPDK bdev Controller", 00:21:20.284 "max_namespaces": 32, 00:21:20.284 "min_cntlid": 1, 00:21:20.284 "max_cntlid": 65519, 00:21:20.284 "namespaces": [ 00:21:20.284 { 00:21:20.284 "nsid": 1, 00:21:20.284 "bdev_name": "Malloc2", 00:21:20.284 "name": "Malloc2", 00:21:20.284 "nguid": "85E894C7BACE40E58AEB8884BBDA68CA", 00:21:20.284 "uuid": "85e894c7-bace-40e5-8aeb-8884bbda68ca" 00:21:20.284 } 00:21:20.284 ] 00:21:20.284 } 00:21:20.284 ] 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2658972 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:21:20.284 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:21:20.284 [2024-11-20 17:48:20.166383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:20.544 Malloc3 00:21:20.544 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:21:20.544 [2024-11-20 17:48:20.370788] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:20.544 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:20.544 Asynchronous Event Request test 00:21:20.544 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:20.544 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:20.544 Registering asynchronous event callbacks... 00:21:20.544 Starting namespace attribute notice tests for all controllers... 00:21:20.544 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:20.544 aer_cb - Changed Namespace 00:21:20.544 Cleaning up... 00:21:20.805 [ 00:21:20.805 { 00:21:20.805 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:20.805 "subtype": "Discovery", 00:21:20.805 "listen_addresses": [], 00:21:20.805 "allow_any_host": true, 00:21:20.805 "hosts": [] 00:21:20.805 }, 00:21:20.805 { 00:21:20.805 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:20.805 "subtype": "NVMe", 00:21:20.805 "listen_addresses": [ 00:21:20.805 { 00:21:20.805 "trtype": "VFIOUSER", 00:21:20.805 "adrfam": "IPv4", 00:21:20.805 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:20.805 "trsvcid": "0" 00:21:20.805 } 00:21:20.805 ], 00:21:20.805 "allow_any_host": true, 00:21:20.805 "hosts": [], 00:21:20.805 "serial_number": "SPDK1", 00:21:20.805 "model_number": "SPDK bdev Controller", 00:21:20.805 "max_namespaces": 32, 00:21:20.805 "min_cntlid": 1, 00:21:20.805 "max_cntlid": 65519, 00:21:20.805 "namespaces": [ 00:21:20.805 { 00:21:20.805 "nsid": 1, 00:21:20.805 "bdev_name": "Malloc1", 00:21:20.805 "name": "Malloc1", 00:21:20.805 "nguid": "0DC2E3DD4CB64838AA4700100F6B9E03", 00:21:20.805 "uuid": "0dc2e3dd-4cb6-4838-aa47-00100f6b9e03" 00:21:20.805 }, 00:21:20.805 { 00:21:20.805 "nsid": 2, 00:21:20.805 "bdev_name": "Malloc3", 00:21:20.805 "name": "Malloc3", 00:21:20.805 "nguid": "EE21918E65B04BD5BF9E042595D36811", 00:21:20.805 "uuid": "ee21918e-65b0-4bd5-bf9e-042595d36811" 00:21:20.805 } 00:21:20.805 ] 00:21:20.805 }, 00:21:20.805 { 00:21:20.805 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:20.805 "subtype": "NVMe", 00:21:20.805 "listen_addresses": [ 00:21:20.805 { 00:21:20.805 "trtype": "VFIOUSER", 00:21:20.805 "adrfam": "IPv4", 00:21:20.805 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:20.805 "trsvcid": "0" 00:21:20.805 } 00:21:20.805 ], 00:21:20.805 "allow_any_host": true, 00:21:20.805 "hosts": [], 00:21:20.805 "serial_number": "SPDK2", 00:21:20.805 "model_number": "SPDK bdev 
Controller", 00:21:20.805 "max_namespaces": 32, 00:21:20.805 "min_cntlid": 1, 00:21:20.805 "max_cntlid": 65519, 00:21:20.805 "namespaces": [ 00:21:20.805 { 00:21:20.806 "nsid": 1, 00:21:20.806 "bdev_name": "Malloc2", 00:21:20.806 "name": "Malloc2", 00:21:20.806 "nguid": "85E894C7BACE40E58AEB8884BBDA68CA", 00:21:20.806 "uuid": "85e894c7-bace-40e5-8aeb-8884bbda68ca" 00:21:20.806 } 00:21:20.806 ] 00:21:20.806 } 00:21:20.806 ] 00:21:20.806 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2658972 00:21:20.806 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:20.806 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:21:20.806 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:21:20.806 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:21:20.806 [2024-11-20 17:48:20.603964] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:20.806 [2024-11-20 17:48:20.604009] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2658988 ] 00:21:20.806 [2024-11-20 17:48:20.633202] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:21:20.806 [2024-11-20 17:48:20.641324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:20.806 [2024-11-20 17:48:20.641341] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6e24dc9000 00:21:20.806 [2024-11-20 17:48:20.642327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:20.806 [2024-11-20 17:48:20.643328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:20.806 [2024-11-20 17:48:20.644334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:20.806 [2024-11-20 17:48:20.645340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:20.806 [2024-11-20 17:48:20.646343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:20.806 [2024-11-20 17:48:20.647349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:20.806 [2024-11-20 17:48:20.648356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:20.806 [2024-11-20 17:48:20.649364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:21:20.806 [2024-11-20 17:48:20.650370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:20.806 [2024-11-20 17:48:20.650377] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6e23ad2000 00:21:20.806 [2024-11-20 17:48:20.651293] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:20.806 [2024-11-20 17:48:20.660665] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:21:20.806 [2024-11-20 17:48:20.660686] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:21:20.806 [2024-11-20 17:48:20.665767] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:21:20.806 [2024-11-20 17:48:20.665797] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:21:20.806 [2024-11-20 17:48:20.665857] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:21:20.806 [2024-11-20 17:48:20.665870] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:21:20.806 [2024-11-20 17:48:20.665873] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:21:20.806 [2024-11-20 17:48:20.666775] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:21:20.806 [2024-11-20 17:48:20.666782] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:21:20.806 [2024-11-20 17:48:20.666787] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:21:20.806 [2024-11-20 17:48:20.667783] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:21:20.806 [2024-11-20 17:48:20.667790] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:21:20.806 [2024-11-20 17:48:20.667795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:21:20.806 [2024-11-20 17:48:20.668787] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:21:20.806 [2024-11-20 17:48:20.668794] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:20.806 [2024-11-20 17:48:20.669801] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:21:20.806 [2024-11-20 17:48:20.669807] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:21:20.806 [2024-11-20 
17:48:20.669811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:21:20.806 [2024-11-20 17:48:20.669815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:20.806 [2024-11-20 17:48:20.669919] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:21:20.806 [2024-11-20 17:48:20.669922] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:20.806 [2024-11-20 17:48:20.669926] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:21:20.806 [2024-11-20 17:48:20.670806] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:21:20.806 [2024-11-20 17:48:20.671809] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:21:20.806 [2024-11-20 17:48:20.672816] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:21:20.806 [2024-11-20 17:48:20.673819] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:20.806 [2024-11-20 17:48:20.673860] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:20.806 [2024-11-20 17:48:20.674832] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:21:20.806 [2024-11-20 17:48:20.674838] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:20.806 [2024-11-20 17:48:20.674841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:21:20.806 [2024-11-20 17:48:20.674856] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:21:20.806 [2024-11-20 17:48:20.674861] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:21:20.806 [2024-11-20 17:48:20.674870] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:20.806 [2024-11-20 17:48:20.674873] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:20.806 [2024-11-20 17:48:20.674876] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:20.806 [2024-11-20 17:48:20.674884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:20.806 [2024-11-20 17:48:20.681165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:21:20.806 [2024-11-20 17:48:20.681173] 
nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:21:20.806 [2024-11-20 17:48:20.681177] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:21:20.806 [2024-11-20 17:48:20.681180] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:21:20.806 [2024-11-20 17:48:20.681183] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:21:20.806 [2024-11-20 17:48:20.681187] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:21:20.806 [2024-11-20 17:48:20.681190] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:21:20.806 [2024-11-20 17:48:20.681193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:21:20.806 [2024-11-20 17:48:20.681199] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:21:20.806 [2024-11-20 17:48:20.681206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:21:20.806 [2024-11-20 17:48:20.689162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:21:20.806 [2024-11-20 17:48:20.689172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.806 [2024-11-20 17:48:20.689178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.806 [2024-11-20 17:48:20.689184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.806 [2024-11-20 17:48:20.689190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.806 [2024-11-20 17:48:20.689193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:21:20.806 [2024-11-20 17:48:20.689202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:20.806 [2024-11-20 17:48:20.689209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:21:20.807 [2024-11-20 17:48:20.697163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:21:20.807 [2024-11-20 17:48:20.697169] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:21:20.807 [2024-11-20 17:48:20.697173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:20.807 [2024-11-20 17:48:20.697177] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:21:20.807 [2024-11-20 17:48:20.697183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:21:20.807 [2024-11-20 17:48:20.697189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:20.807 [2024-11-20 17:48:20.705163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:21:20.807 [2024-11-20 17:48:20.705211] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:21:20.807 [2024-11-20 17:48:20.705217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:21:20.807 [2024-11-20 17:48:20.705222] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:21:20.807 [2024-11-20 17:48:20.705225] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:21:20.807 [2024-11-20 17:48:20.705228] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:20.807 [2024-11-20 17:48:20.705232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:21:20.807 [2024-11-20 17:48:20.713172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:21:20.807 [2024-11-20 17:48:20.713180] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:21:20.807 [2024-11-20 17:48:20.713189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:21:20.807 [2024-11-20 17:48:20.713195] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:21:20.807 [2024-11-20 17:48:20.713200] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:20.807 [2024-11-20 17:48:20.713203] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:20.807 [2024-11-20 17:48:20.713205] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:20.807 [2024-11-20 17:48:20.713210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:21.068 [2024-11-20 17:48:20.721163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.721174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:21.069 [2024-11-20 17:48:20.721182] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:21.069 
[2024-11-20 17:48:20.721187] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:21.069 [2024-11-20 17:48:20.721190] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:21.069 [2024-11-20 17:48:20.721192] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:21.069 [2024-11-20 17:48:20.721197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:21.069 [2024-11-20 17:48:20.729163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.729169] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:21.069 [2024-11-20 17:48:20.729174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:21:21.069 [2024-11-20 17:48:20.729179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:21:21.069 [2024-11-20 17:48:20.729184] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:21:21.069 [2024-11-20 17:48:20.729187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:21.069 [2024-11-20 17:48:20.729191] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:21:21.069 [2024-11-20 17:48:20.729194] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:21:21.069 [2024-11-20 17:48:20.729197] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:21:21.069 [2024-11-20 17:48:20.729201] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:21:21.069 [2024-11-20 17:48:20.729213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:21:21.069 [2024-11-20 17:48:20.737164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.737174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:21:21.069 [2024-11-20 17:48:20.745163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.745172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:21:21.069 [2024-11-20 17:48:20.753163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.753173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:21.069 [2024-11-20 17:48:20.761163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.761177] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:21:21.069 [2024-11-20 17:48:20.761180] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:21:21.069 [2024-11-20 17:48:20.761182] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:21:21.069 [2024-11-20 17:48:20.761186] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:21:21.069 [2024-11-20 17:48:20.761189] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:21:21.069 [2024-11-20 17:48:20.761193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:21:21.069 [2024-11-20 17:48:20.761198] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:21:21.069 [2024-11-20 17:48:20.761201] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:21:21.069 [2024-11-20 17:48:20.761204] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:21.069 [2024-11-20 17:48:20.761208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:21:21.069 [2024-11-20 17:48:20.761213] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:21:21.069 [2024-11-20 17:48:20.761216] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:21.069 [2024-11-20 17:48:20.761218] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:21.069 [2024-11-20 17:48:20.761222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:21.069 [2024-11-20 17:48:20.761228] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:21:21.069 [2024-11-20 17:48:20.761231] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:21:21.069 [2024-11-20 17:48:20.761233] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:21:21.069 [2024-11-20 17:48:20.761237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:21:21.069 [2024-11-20 17:48:20.769162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.769172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.769180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:21:21.069 [2024-11-20 17:48:20.769185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:21:21.069 ===================================================== 00:21:21.069 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:21.069 ===================================================== 00:21:21.069 Controller Capabilities/Features 00:21:21.069 ================================ 00:21:21.069 Vendor ID: 4e58 00:21:21.069 Subsystem Vendor ID: 4e58 00:21:21.069 Serial Number: SPDK2 00:21:21.069 Model Number: SPDK bdev Controller 00:21:21.069 Firmware Version: 24.09.1 00:21:21.069 Recommended Arb Burst: 6 00:21:21.069 IEEE OUI Identifier: 8d 6b 50 00:21:21.069 Multi-path I/O 00:21:21.069 May have multiple subsystem ports: Yes 00:21:21.069 May have multiple controllers: Yes 00:21:21.069 Associated with SR-IOV VF: No 00:21:21.069 Max Data Transfer Size: 131072 00:21:21.069 Max Number of Namespaces: 32 00:21:21.069 Max Number of I/O Queues: 127 00:21:21.069 NVMe Specification Version (VS): 1.3 00:21:21.069 NVMe Specification Version (Identify): 1.3 00:21:21.069 Maximum Queue Entries: 256 00:21:21.069 Contiguous Queues Required: Yes 00:21:21.069 Arbitration Mechanisms Supported 00:21:21.069 Weighted Round Robin: Not Supported 00:21:21.069 Vendor Specific: Not Supported 00:21:21.069 Reset Timeout: 15000 ms 00:21:21.069 Doorbell Stride: 4 bytes 00:21:21.069 NVM Subsystem Reset: Not Supported 00:21:21.069 Command Sets Supported 00:21:21.069 NVM Command Set: Supported 00:21:21.069 Boot Partition: Not Supported 00:21:21.069 Memory Page Size Minimum: 4096 bytes 00:21:21.069 Memory Page Size Maximum: 4096 bytes 00:21:21.069 Persistent Memory Region: Not Supported 00:21:21.069 Optional Asynchronous Events Supported 00:21:21.069 Namespace Attribute Notices: Supported 00:21:21.069 Firmware Activation Notices: Not Supported 00:21:21.069 ANA Change Notices: Not Supported 00:21:21.069 PLE Aggregate Log Change Notices: Not Supported 00:21:21.069 LBA Status Info Alert Notices: Not Supported 00:21:21.069 EGE Aggregate Log Change Notices: Not Supported 00:21:21.069 Normal NVM Subsystem Shutdown event: Not Supported 00:21:21.069 Zone Descriptor Change Notices: Not Supported 00:21:21.069 Discovery Log Change Notices: Not Supported 00:21:21.069 Controller Attributes 00:21:21.069 128-bit Host Identifier: Supported 00:21:21.069 Non-Operational Permissive Mode: Not Supported 00:21:21.069 NVM Sets: Not Supported 00:21:21.069 Read Recovery Levels: Not Supported 00:21:21.069 Endurance Groups: Not Supported 00:21:21.069 Predictable Latency Mode: Not Supported 00:21:21.069 Traffic Based Keep ALive: Not Supported 00:21:21.069 Namespace Granularity: Not Supported 00:21:21.069 SQ Associations: Not Supported 00:21:21.069 UUID List: Not Supported 00:21:21.069 Multi-Domain Subsystem: Not Supported 00:21:21.069 Fixed Capacity Management: Not Supported 00:21:21.069 Variable Capacity Management: Not Supported 00:21:21.069 Delete Endurance Group: Not Supported 00:21:21.069 Delete NVM Set: Not Supported 00:21:21.069 Extended LBA Formats Supported: Not Supported 00:21:21.069 Flexible Data Placement Supported: Not Supported 00:21:21.069 00:21:21.069 Controller Memory Buffer Support 00:21:21.069 ================================ 00:21:21.069 Supported: No 00:21:21.069 00:21:21.069 Persistent Memory Region Support 00:21:21.069 ================================ 00:21:21.069 Supported: No 00:21:21.069 00:21:21.069 Admin Command Set Attributes 00:21:21.069 ============================ 00:21:21.069 Security Send/Receive: Not Supported 
00:21:21.069 Format NVM: Not Supported 00:21:21.069 Firmware Activate/Download: Not Supported 00:21:21.069 Namespace Management: Not Supported 00:21:21.070 Device Self-Test: Not Supported 00:21:21.070 Directives: Not Supported 00:21:21.070 NVMe-MI: Not Supported 00:21:21.070 Virtualization Management: Not Supported 00:21:21.070 Doorbell Buffer Config: Not Supported 00:21:21.070 Get LBA Status Capability: Not Supported 00:21:21.070 Command & Feature Lockdown Capability: Not Supported 00:21:21.070 Abort Command Limit: 4 00:21:21.070 Async Event Request Limit: 4 00:21:21.070 Number of Firmware Slots: N/A 00:21:21.070 Firmware Slot 1 Read-Only: N/A 00:21:21.070 Firmware Activation Without Reset: N/A 00:21:21.070 Multiple Update Detection Support: N/A 00:21:21.070 Firmware Update Granularity: No Information Provided 00:21:21.070 Per-Namespace SMART Log: No 00:21:21.070 Asymmetric Namespace Access Log Page: Not Supported 00:21:21.070 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:21:21.070 Command Effects Log Page: Supported 00:21:21.070 Get Log Page Extended Data: Supported 00:21:21.070 Telemetry Log Pages: Not Supported 00:21:21.070 Persistent Event Log Pages: Not Supported 00:21:21.070 Supported Log Pages Log Page: May Support 00:21:21.070 Commands Supported & Effects Log Page: Not Supported 00:21:21.070 Feature Identifiers & Effects Log Page:May Support 00:21:21.070 NVMe-MI Commands & Effects Log Page: May Support 00:21:21.070 Data Area 4 for Telemetry Log: Not Supported 00:21:21.070 Error Log Page Entries Supported: 128 00:21:21.070 Keep Alive: Supported 00:21:21.070 Keep Alive Granularity: 10000 ms 00:21:21.070 00:21:21.070 NVM Command Set Attributes 00:21:21.070 ========================== 00:21:21.070 Submission Queue Entry Size 00:21:21.070 Max: 64 00:21:21.070 Min: 64 00:21:21.070 Completion Queue Entry Size 00:21:21.070 Max: 16 00:21:21.070 Min: 16 00:21:21.070 Number of Namespaces: 32 00:21:21.070 Compare Command: Supported 00:21:21.070 Write Uncorrectable Command: Not Supported 00:21:21.070 Dataset Management Command: Supported 00:21:21.070 Write Zeroes Command: Supported 00:21:21.070 Set Features Save Field: Not Supported 00:21:21.070 Reservations: Not Supported 00:21:21.070 Timestamp: Not Supported 00:21:21.070 Copy: Supported 00:21:21.070 Volatile Write Cache: Present 00:21:21.070 Atomic Write Unit (Normal): 1 00:21:21.070 Atomic Write Unit (PFail): 1 00:21:21.070 Atomic Compare & Write Unit: 1 00:21:21.070 Fused Compare & Write: Supported 00:21:21.070 Scatter-Gather List 00:21:21.070 SGL Command Set: Supported (Dword aligned) 00:21:21.070 SGL Keyed: Not Supported 00:21:21.070 SGL Bit Bucket Descriptor: Not Supported 00:21:21.070 SGL Metadata Pointer: Not Supported 00:21:21.070 Oversized SGL: Not Supported 00:21:21.070 SGL Metadata Address: Not Supported 00:21:21.070 SGL Offset: Not Supported 00:21:21.070 Transport SGL Data Block: Not Supported 00:21:21.070 Replay Protected Memory Block: Not Supported 00:21:21.070 00:21:21.070 Firmware Slot Information 00:21:21.070 ========================= 00:21:21.070 Active slot: 1 00:21:21.070 Slot 1 Firmware Revision: 24.09.1 00:21:21.070 00:21:21.070 00:21:21.070 Commands Supported and Effects 00:21:21.070 ============================== 00:21:21.070 Admin Commands 00:21:21.070 -------------- 00:21:21.070 Get Log Page (02h): Supported 00:21:21.070 Identify (06h): Supported 00:21:21.070 Abort (08h): Supported 00:21:21.070 Set Features (09h): Supported 00:21:21.070 Get Features (0Ah): Supported 00:21:21.070 Asynchronous Event Request (0Ch): 
Supported 00:21:21.070 Keep Alive (18h): Supported 00:21:21.070 I/O Commands 00:21:21.070 ------------ 00:21:21.070 Flush (00h): Supported LBA-Change 00:21:21.070 Write (01h): Supported LBA-Change 00:21:21.070 Read (02h): Supported 00:21:21.070 Compare (05h): Supported 00:21:21.070 Write Zeroes (08h): Supported LBA-Change 00:21:21.070 Dataset Management (09h): Supported LBA-Change 00:21:21.070 Copy (19h): Supported LBA-Change 00:21:21.070 00:21:21.070 Error Log 00:21:21.070 ========= 00:21:21.070 00:21:21.070 Arbitration 00:21:21.070 =========== 00:21:21.070 Arbitration Burst: 1 00:21:21.070 00:21:21.070 Power Management 00:21:21.070 ================ 00:21:21.070 Number of Power States: 1 00:21:21.070 Current Power State: Power State #0 00:21:21.070 Power State #0: 00:21:21.070 Max Power: 0.00 W 00:21:21.070 Non-Operational State: Operational 00:21:21.070 Entry Latency: Not Reported 00:21:21.070 Exit Latency: Not Reported 00:21:21.070 Relative Read Throughput: 0 00:21:21.070 Relative Read Latency: 0 00:21:21.070 Relative Write Throughput: 0 00:21:21.070 Relative Write Latency: 0 00:21:21.070 Idle Power: Not Reported 00:21:21.070 Active Power: Not Reported 00:21:21.070 Non-Operational Permissive Mode: Not Supported 00:21:21.070 00:21:21.070 Health Information 00:21:21.070 ================== 00:21:21.070 Critical Warnings: 00:21:21.070 Available Spare Space: OK 00:21:21.070 Temperature: OK 00:21:21.070 Device Reliability: OK 00:21:21.070 Read Only: No 00:21:21.070 Volatile Memory Backup: OK 00:21:21.070 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:21.070 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:21.070 Available Spare: 0% 00:21:21.070 Availabl[2024-11-20 17:48:20.769254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:21:21.070 [2024-11-20 17:48:20.777164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:21:21.070 [2024-11-20 17:48:20.777187] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:21:21.070 [2024-11-20 17:48:20.777193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.070 [2024-11-20 17:48:20.777198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.070 [2024-11-20 17:48:20.777202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.070 [2024-11-20 17:48:20.777212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.070 [2024-11-20 17:48:20.777254] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:21:21.070 [2024-11-20 17:48:20.777262] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:21:21.070 [2024-11-20 17:48:20.778253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:21.070 [2024-11-20 17:48:20.778288] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:21:21.070 [2024-11-20 17:48:20.778293] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:21:21.070 [2024-11-20 17:48:20.779260] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:21:21.070 [2024-11-20 17:48:20.779269] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:21:21.070 [2024-11-20 17:48:20.779321] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:21:21.070 [2024-11-20 17:48:20.780281] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:21.070 e Spare Threshold: 0% 00:21:21.070 Life Percentage Used: 0% 00:21:21.070 Data Units Read: 0 00:21:21.070 Data Units Written: 0 00:21:21.070 Host Read Commands: 0 00:21:21.070 Host Write Commands: 0 00:21:21.070 Controller Busy Time: 0 minutes 00:21:21.070 Power Cycles: 0 00:21:21.070 Power On Hours: 0 hours 00:21:21.070 Unsafe Shutdowns: 0 00:21:21.070 Unrecoverable Media Errors: 0 00:21:21.070 Lifetime Error Log Entries: 0 00:21:21.070 Warning Temperature Time: 0 minutes 00:21:21.070 Critical Temperature Time: 0 minutes 00:21:21.070 00:21:21.070 Number of Queues 00:21:21.070 ================ 00:21:21.070 Number of I/O Submission Queues: 127 00:21:21.070 Number of I/O Completion Queues: 127 00:21:21.070 00:21:21.070 Active Namespaces 00:21:21.070 ================= 00:21:21.070 Namespace ID:1 00:21:21.070 Error Recovery Timeout: Unlimited 00:21:21.070 Command Set Identifier: NVM (00h) 00:21:21.070 Deallocate: Supported 00:21:21.070 Deallocated/Unwritten Error: Not Supported 00:21:21.070 Deallocated Read Value: Unknown 00:21:21.070 Deallocate in Write Zeroes: Not Supported 00:21:21.070 Deallocated Guard Field: 0xFFFF 00:21:21.070 Flush: Supported 00:21:21.070 Reservation: Supported 00:21:21.070 Namespace Sharing Capabilities: Multiple Controllers 00:21:21.071 Size (in LBAs): 131072 (0GiB) 00:21:21.071 Capacity (in LBAs): 131072 (0GiB) 00:21:21.071 Utilization (in LBAs): 131072 (0GiB) 00:21:21.071 NGUID: 85E894C7BACE40E58AEB8884BBDA68CA 00:21:21.071 UUID: 85e894c7-bace-40e5-8aeb-8884bbda68ca 00:21:21.071 Thin Provisioning: Not Supported 00:21:21.071 Per-NS Atomic Units: Yes 00:21:21.071 Atomic Boundary Size (Normal): 0 00:21:21.071 Atomic Boundary Size (PFail): 0 00:21:21.071 Atomic Boundary Offset: 0 00:21:21.071 Maximum Single Source Range Length: 65535 00:21:21.071 Maximum Copy Length: 65535 00:21:21.071 Maximum Source Range Count: 1 00:21:21.071 NGUID/EUI64 Never Reused: No 00:21:21.071 Namespace Write Protected: No 00:21:21.071 Number of LBA Formats: 1 00:21:21.071 Current LBA Format: LBA Format #00 00:21:21.071 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:21.071 00:21:21.071 17:48:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:21:21.071 [2024-11-20 17:48:20.949527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:26.359 Initializing NVMe Controllers 00:21:26.359 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:21:26.359 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:21:26.359 Initialization complete. Launching workers. 00:21:26.359 ======================================================== 00:21:26.359 Latency(us) 00:21:26.359 Device Information : IOPS MiB/s Average min max 00:21:26.359 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39986.13 156.20 3200.79 838.92 9791.68 00:21:26.359 ======================================================== 00:21:26.359 Total : 39986.13 156.20 3200.79 838.92 9791.68 00:21:26.359 00:21:26.359 [2024-11-20 17:48:26.054371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:26.359 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:21:26.359 [2024-11-20 17:48:26.228870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:31.695 Initializing NVMe Controllers 00:21:31.695 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:31.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:21:31.695 Initialization complete. Launching workers. 00:21:31.695 ======================================================== 00:21:31.695 Latency(us) 00:21:31.695 Device Information : IOPS MiB/s Average min max 00:21:31.695 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40045.82 156.43 3196.22 849.71 8462.69 00:21:31.695 ======================================================== 00:21:31.695 Total : 40045.82 156.43 3196.22 849.71 8462.69 00:21:31.695 00:21:31.695 [2024-11-20 17:48:31.249544] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:31.695 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:21:31.695 [2024-11-20 17:48:31.442721] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:36.984 [2024-11-20 17:48:36.572271] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:36.984 Initializing NVMe Controllers 00:21:36.984 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:36.984 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:36.984 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:21:36.984 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:21:36.984 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:21:36.984 Initialization complete. Launching workers. 
00:21:36.984 Starting thread on core 2 00:21:36.984 Starting thread on core 3 00:21:36.984 Starting thread on core 1 00:21:36.984 17:48:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:21:36.984 [2024-11-20 17:48:36.804579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:40.416 [2024-11-20 17:48:39.857233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:40.416 Initializing NVMe Controllers 00:21:40.416 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:40.416 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:40.416 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:21:40.416 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:21:40.416 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:21:40.416 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:21:40.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:21:40.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:21:40.416 Initialization complete. Launching workers. 00:21:40.416 Starting thread on core 1 with urgent priority queue 00:21:40.416 Starting thread on core 2 with urgent priority queue 00:21:40.416 Starting thread on core 3 with urgent priority queue 00:21:40.416 Starting thread on core 0 with urgent priority queue 00:21:40.416 SPDK bdev Controller (SPDK2 ) core 0: 13176.67 IO/s 7.59 secs/100000 ios 00:21:40.416 SPDK bdev Controller (SPDK2 ) core 1: 9242.33 IO/s 10.82 secs/100000 ios 00:21:40.416 SPDK bdev Controller (SPDK2 ) core 2: 17141.00 IO/s 5.83 secs/100000 ios 00:21:40.416 SPDK bdev Controller (SPDK2 ) core 3: 14104.67 IO/s 7.09 secs/100000 ios 00:21:40.416 ======================================================== 00:21:40.416 00:21:40.416 17:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:21:40.416 [2024-11-20 17:48:40.084538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:40.416 Initializing NVMe Controllers 00:21:40.416 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:40.416 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:40.416 Namespace ID: 1 size: 0GB 00:21:40.416 Initialization complete. 00:21:40.416 INFO: using host memory buffer for IO 00:21:40.416 Hello world! 
00:21:40.416 [2024-11-20 17:48:40.094603] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:40.416 17:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:21:40.416 [2024-11-20 17:48:40.315221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:41.842 Initializing NVMe Controllers 00:21:41.842 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:41.842 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:41.842 Initialization complete. Launching workers. 00:21:41.842 submit (in ns) avg, min, max = 5259.6, 2820.0, 3998811.7 00:21:41.842 complete (in ns) avg, min, max = 16052.9, 1628.3, 5991430.8 00:21:41.842 00:21:41.842 Submit histogram 00:21:41.842 ================ 00:21:41.842 Range in us Cumulative Count 00:21:41.842 2.813 - 2.827: 0.2109% ( 44) 00:21:41.842 2.827 - 2.840: 1.4093% ( 250) 00:21:41.842 2.840 - 2.853: 4.3334% ( 610) 00:21:41.842 2.853 - 2.867: 10.3495% ( 1255) 00:21:41.842 2.867 - 2.880: 15.3828% ( 1050) 00:21:41.842 2.880 - 2.893: 20.6606% ( 1101) 00:21:41.842 2.893 - 2.907: 25.2625% ( 960) 00:21:41.842 2.907 - 2.920: 30.9333% ( 1183) 00:21:41.842 2.920 - 2.933: 36.5275% ( 1167) 00:21:41.842 2.933 - 2.947: 42.5291% ( 1252) 00:21:41.842 2.947 - 2.960: 48.3294% ( 1210) 00:21:41.842 2.960 - 2.973: 53.4107% ( 1060) 00:21:41.842 2.973 - 2.987: 60.2368% ( 1424) 00:21:41.842 2.987 - 3.000: 70.4952% ( 2140) 00:21:41.842 3.000 - 3.013: 79.9578% ( 1974) 00:21:41.842 3.013 - 3.027: 87.3256% ( 1537) 00:21:41.842 3.027 - 3.040: 92.8862% ( 1160) 00:21:41.842 3.040 - 3.053: 96.1363% ( 678) 00:21:41.842 3.053 - 3.067: 98.2024% ( 431) 00:21:41.842 3.067 - 3.080: 99.1563% ( 199) 00:21:41.842 3.080 - 3.093: 99.4679% ( 65) 00:21:41.842 3.093 - 3.107: 99.5494% ( 17) 00:21:41.842 3.107 - 3.120: 99.6021% ( 11) 00:21:41.842 3.120 - 3.133: 99.6213% ( 4) 00:21:41.842 3.133 - 3.147: 99.6261% ( 1) 00:21:41.842 3.147 - 3.160: 99.6405% ( 3) 00:21:41.842 3.160 - 3.173: 99.6453% ( 1) 00:21:41.842 3.173 - 3.187: 99.6501% ( 1) 00:21:41.842 3.187 - 3.200: 99.6549% ( 1) 00:21:41.842 3.267 - 3.280: 99.6597% ( 1) 00:21:41.842 3.440 - 3.467: 99.6644% ( 1) 00:21:41.842 3.467 - 3.493: 99.6692% ( 1) 00:21:41.842 3.547 - 3.573: 99.6740% ( 1) 00:21:41.842 3.707 - 3.733: 99.6788% ( 1) 00:21:41.842 3.840 - 3.867: 99.6836% ( 1) 00:21:41.842 4.027 - 4.053: 99.6884% ( 1) 00:21:41.842 4.320 - 4.347: 99.6932% ( 1) 00:21:41.842 4.400 - 4.427: 99.6980% ( 1) 00:21:41.842 4.427 - 4.453: 99.7028% ( 1) 00:21:41.843 4.480 - 4.507: 99.7076% ( 1) 00:21:41.843 4.507 - 4.533: 99.7172% ( 2) 00:21:41.843 4.533 - 4.560: 99.7220% ( 1) 00:21:41.843 4.560 - 4.587: 99.7268% ( 1) 00:21:41.843 4.613 - 4.640: 99.7316% ( 1) 00:21:41.843 4.667 - 4.693: 99.7364% ( 1) 00:21:41.843 4.853 - 4.880: 99.7411% ( 1) 00:21:41.843 4.907 - 4.933: 99.7507% ( 2) 00:21:41.843 4.933 - 4.960: 99.7603% ( 2) 00:21:41.843 4.987 - 5.013: 99.7699% ( 2) 00:21:41.843 5.013 - 5.040: 99.7891% ( 4) 00:21:41.843 5.040 - 5.067: 99.8083% ( 4) 00:21:41.843 5.067 - 5.093: 99.8178% ( 2) 00:21:41.843 5.120 - 5.147: 99.8226% ( 1) 00:21:41.843 5.173 - 5.200: 99.8274% ( 1) 00:21:41.843 5.227 - 5.253: 99.8370% ( 2) 00:21:41.843 5.307 - 5.333: 99.8418% ( 1) 00:21:41.843 5.360 - 5.387: 99.8514% ( 2) 00:21:41.843 5.387 - 5.413: 
99.8562% ( 1) 00:21:41.843 5.413 - 5.440: 99.8610% ( 1) 00:21:41.843 5.467 - 5.493: 99.8706% ( 2) 00:21:41.843 5.520 - 5.547: 99.8754% ( 1) 00:21:41.843 5.547 - 5.573: 99.8802% ( 1) 00:21:41.843 5.573 - 5.600: 99.8850% ( 1) 00:21:41.843 5.627 - 5.653: 99.8897% ( 1) 00:21:41.843 5.680 - 5.707: 99.9041% ( 3) 00:21:41.843 5.787 - 5.813: 99.9089% ( 1) 00:21:41.843 5.813 - 5.840: 99.9137% ( 1) 00:21:41.843 5.893 - 5.920: 99.9185% ( 1) 00:21:41.843 5.973 - 6.000: 99.9233% ( 1) 00:21:41.843 6.080 - 6.107: 99.9281% ( 1) 00:21:41.843 6.987 - 7.040: 99.9329% ( 1) 00:21:41.843 8.267 - 8.320: 99.9377% ( 1) 00:21:41.843 8.640 - 8.693: 99.9425% ( 1) 00:21:41.843 3986.773 - 4014.080: 100.0000% ( 12) 00:21:41.843 00:21:41.843 Complete histogram 00:21:41.843 ================== 00:21:41.843 Range in us Cumulative Count 00:21:41.843 1.627 - 1.633: 0.0048% ( 1) 00:21:41.843 1.633 - 1.640: 0.0719% ( 14) 00:21:41.843 1.640 - 1.647: 1.0594% ( 206) 00:21:41.843 1.647 - [2024-11-20 17:48:41.407651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:41.843 1.653: 1.1505% ( 19) 00:21:41.843 1.653 - 1.660: 1.2559% ( 22) 00:21:41.843 1.660 - 1.667: 1.3614% ( 22) 00:21:41.843 1.667 - 1.673: 1.4621% ( 21) 00:21:41.843 1.673 - 1.680: 45.6642% ( 9221) 00:21:41.843 1.680 - 1.687: 54.5803% ( 1860) 00:21:41.843 1.687 - 1.693: 60.0978% ( 1151) 00:21:41.843 1.693 - 1.700: 74.0616% ( 2913) 00:21:41.843 1.700 - 1.707: 78.8121% ( 991) 00:21:41.843 1.707 - 1.720: 83.5722% ( 993) 00:21:41.843 1.720 - 1.733: 84.6076% ( 216) 00:21:41.843 1.733 - 1.747: 88.4282% ( 797) 00:21:41.843 1.747 - 1.760: 93.9121% ( 1144) 00:21:41.843 1.760 - 1.773: 97.4881% ( 746) 00:21:41.843 1.773 - 1.787: 99.0269% ( 321) 00:21:41.843 1.787 - 1.800: 99.4344% ( 85) 00:21:41.843 1.800 - 1.813: 99.4679% ( 7) 00:21:41.843 3.133 - 3.147: 99.4727% ( 1) 00:21:41.843 3.187 - 3.200: 99.4775% ( 1) 00:21:41.843 3.227 - 3.240: 99.4823% ( 1) 00:21:41.843 3.253 - 3.267: 99.4871% ( 1) 00:21:41.843 3.440 - 3.467: 99.4919% ( 1) 00:21:41.843 3.520 - 3.547: 99.4967% ( 1) 00:21:41.843 3.547 - 3.573: 99.5063% ( 2) 00:21:41.843 3.573 - 3.600: 99.5110% ( 1) 00:21:41.843 3.627 - 3.653: 99.5206% ( 2) 00:21:41.843 3.653 - 3.680: 99.5254% ( 1) 00:21:41.843 3.787 - 3.813: 99.5350% ( 2) 00:21:41.843 3.840 - 3.867: 99.5398% ( 1) 00:21:41.843 3.893 - 3.920: 99.5446% ( 1) 00:21:41.843 3.920 - 3.947: 99.5494% ( 1) 00:21:41.843 3.947 - 3.973: 99.5542% ( 1) 00:21:41.843 4.000 - 4.027: 99.5590% ( 1) 00:21:41.843 4.080 - 4.107: 99.5686% ( 2) 00:21:41.843 4.107 - 4.133: 99.5734% ( 1) 00:21:41.843 4.133 - 4.160: 99.5830% ( 2) 00:21:41.843 4.240 - 4.267: 99.5877% ( 1) 00:21:41.843 4.267 - 4.293: 99.5925% ( 1) 00:21:41.843 4.320 - 4.347: 99.6021% ( 2) 00:21:41.843 4.400 - 4.427: 99.6069% ( 1) 00:21:41.843 4.480 - 4.507: 99.6117% ( 1) 00:21:41.843 4.587 - 4.613: 99.6213% ( 2) 00:21:41.843 4.613 - 4.640: 99.6261% ( 1) 00:21:41.843 5.013 - 5.040: 99.6309% ( 1) 00:21:41.843 7.947 - 8.000: 99.6357% ( 1) 00:21:41.843 34.560 - 34.773: 99.6405% ( 1) 00:21:41.843 1747.627 - 1761.280: 99.6453% ( 1) 00:21:41.843 2020.693 - 2034.347: 99.6501% ( 1) 00:21:41.843 2088.960 - 2102.613: 99.6549% ( 1) 00:21:41.843 3986.773 - 4014.080: 99.9856% ( 69) 00:21:41.843 5980.160 - 6007.467: 100.0000% ( 3) 00:21:41.843 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:41.843 [ 00:21:41.843 { 00:21:41.843 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:41.843 "subtype": "Discovery", 00:21:41.843 "listen_addresses": [], 00:21:41.843 "allow_any_host": true, 00:21:41.843 "hosts": [] 00:21:41.843 }, 00:21:41.843 { 00:21:41.843 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:41.843 "subtype": "NVMe", 00:21:41.843 "listen_addresses": [ 00:21:41.843 { 00:21:41.843 "trtype": "VFIOUSER", 00:21:41.843 "adrfam": "IPv4", 00:21:41.843 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:41.843 "trsvcid": "0" 00:21:41.843 } 00:21:41.843 ], 00:21:41.843 "allow_any_host": true, 00:21:41.843 "hosts": [], 00:21:41.843 "serial_number": "SPDK1", 00:21:41.843 "model_number": "SPDK bdev Controller", 00:21:41.843 "max_namespaces": 32, 00:21:41.843 "min_cntlid": 1, 00:21:41.843 "max_cntlid": 65519, 00:21:41.843 "namespaces": [ 00:21:41.843 { 00:21:41.843 "nsid": 1, 00:21:41.843 "bdev_name": "Malloc1", 00:21:41.843 "name": "Malloc1", 00:21:41.843 "nguid": "0DC2E3DD4CB64838AA4700100F6B9E03", 00:21:41.843 "uuid": "0dc2e3dd-4cb6-4838-aa47-00100f6b9e03" 00:21:41.843 }, 00:21:41.843 { 00:21:41.843 "nsid": 2, 00:21:41.843 "bdev_name": "Malloc3", 00:21:41.843 "name": "Malloc3", 00:21:41.843 "nguid": "EE21918E65B04BD5BF9E042595D36811", 00:21:41.843 "uuid": "ee21918e-65b0-4bd5-bf9e-042595d36811" 00:21:41.843 } 00:21:41.843 ] 00:21:41.843 }, 00:21:41.843 { 00:21:41.843 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:41.843 "subtype": "NVMe", 00:21:41.843 "listen_addresses": [ 00:21:41.843 { 00:21:41.843 "trtype": "VFIOUSER", 00:21:41.843 "adrfam": "IPv4", 00:21:41.843 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:41.843 "trsvcid": "0" 00:21:41.843 } 00:21:41.843 ], 00:21:41.843 "allow_any_host": true, 00:21:41.843 "hosts": [], 00:21:41.843 "serial_number": "SPDK2", 00:21:41.843 "model_number": "SPDK bdev Controller", 00:21:41.843 "max_namespaces": 32, 00:21:41.843 "min_cntlid": 1, 00:21:41.843 "max_cntlid": 65519, 00:21:41.843 "namespaces": [ 00:21:41.843 { 00:21:41.843 "nsid": 1, 00:21:41.843 "bdev_name": "Malloc2", 00:21:41.843 "name": "Malloc2", 00:21:41.843 "nguid": "85E894C7BACE40E58AEB8884BBDA68CA", 00:21:41.843 "uuid": "85e894c7-bace-40e5-8aeb-8884bbda68ca" 00:21:41.843 } 00:21:41.843 ] 00:21:41.843 } 00:21:41.843 ] 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2663043 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:21:41.843 17:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:21:41.843 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:21:42.104 [2024-11-20 17:48:41.766654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:42.104 Malloc4 00:21:42.104 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:21:42.104 [2024-11-20 17:48:41.984142] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:42.104 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:42.104 Asynchronous Event Request test 00:21:42.104 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:21:42.104 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:21:42.104 Registering asynchronous event callbacks... 00:21:42.104 Starting namespace attribute notice tests for all controllers... 00:21:42.104 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:42.104 aer_cb - Changed Namespace 00:21:42.104 Cleaning up... 
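The "aer_cb - Changed Namespace" line above is the point of this test: while the aer tool holds an open connection to cnode2 with its callback registered, the script hot-adds a namespace, and the target raises a Namespace Attribute Changed notice (log page 4, aen_event_type 0x02) that the host then reports. Condensed to the three RPC calls involved, a sketch using the rpc.py path and the bdev/subsystem names from this run (RPC is shorthand introduced here, not part of the original script):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for this sketch
    # Create a backing bdev, then hot-add it to the live subsystem as nsid 2;
    # the connected host sees the namespace-change AEN fire for cnode2.
    $RPC bdev_malloc_create 64 512 --name Malloc4
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    $RPC nvmf_get_subsystems    # Malloc4 now listed under cnode2 as nsid 2

The subsystem dump that follows confirms the result: cnode2 now carries Malloc4 as its second namespace.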
00:21:42.364 [ 00:21:42.365 { 00:21:42.365 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:42.365 "subtype": "Discovery", 00:21:42.365 "listen_addresses": [], 00:21:42.365 "allow_any_host": true, 00:21:42.365 "hosts": [] 00:21:42.365 }, 00:21:42.365 { 00:21:42.365 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:42.365 "subtype": "NVMe", 00:21:42.365 "listen_addresses": [ 00:21:42.365 { 00:21:42.365 "trtype": "VFIOUSER", 00:21:42.365 "adrfam": "IPv4", 00:21:42.365 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:42.365 "trsvcid": "0" 00:21:42.365 } 00:21:42.365 ], 00:21:42.365 "allow_any_host": true, 00:21:42.365 "hosts": [], 00:21:42.365 "serial_number": "SPDK1", 00:21:42.365 "model_number": "SPDK bdev Controller", 00:21:42.365 "max_namespaces": 32, 00:21:42.365 "min_cntlid": 1, 00:21:42.365 "max_cntlid": 65519, 00:21:42.365 "namespaces": [ 00:21:42.365 { 00:21:42.365 "nsid": 1, 00:21:42.365 "bdev_name": "Malloc1", 00:21:42.365 "name": "Malloc1", 00:21:42.365 "nguid": "0DC2E3DD4CB64838AA4700100F6B9E03", 00:21:42.365 "uuid": "0dc2e3dd-4cb6-4838-aa47-00100f6b9e03" 00:21:42.365 }, 00:21:42.365 { 00:21:42.365 "nsid": 2, 00:21:42.365 "bdev_name": "Malloc3", 00:21:42.365 "name": "Malloc3", 00:21:42.365 "nguid": "EE21918E65B04BD5BF9E042595D36811", 00:21:42.365 "uuid": "ee21918e-65b0-4bd5-bf9e-042595d36811" 00:21:42.365 } 00:21:42.365 ] 00:21:42.365 }, 00:21:42.365 { 00:21:42.365 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:42.365 "subtype": "NVMe", 00:21:42.365 "listen_addresses": [ 00:21:42.365 { 00:21:42.365 "trtype": "VFIOUSER", 00:21:42.365 "adrfam": "IPv4", 00:21:42.365 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:42.365 "trsvcid": "0" 00:21:42.365 } 00:21:42.365 ], 00:21:42.365 "allow_any_host": true, 00:21:42.365 "hosts": [], 00:21:42.365 "serial_number": "SPDK2", 00:21:42.365 "model_number": "SPDK bdev Controller", 00:21:42.365 "max_namespaces": 32, 00:21:42.365 "min_cntlid": 1, 00:21:42.365 "max_cntlid": 65519, 00:21:42.365 "namespaces": [ 00:21:42.365 { 00:21:42.365 "nsid": 1, 00:21:42.365 "bdev_name": "Malloc2", 00:21:42.365 "name": "Malloc2", 00:21:42.365 "nguid": "85E894C7BACE40E58AEB8884BBDA68CA", 00:21:42.365 "uuid": "85e894c7-bace-40e5-8aeb-8884bbda68ca" 00:21:42.365 }, 00:21:42.365 { 00:21:42.365 "nsid": 2, 00:21:42.365 "bdev_name": "Malloc4", 00:21:42.365 "name": "Malloc4", 00:21:42.365 "nguid": "266FBFBAD533419E905BA39D66DB55B4", 00:21:42.365 "uuid": "266fbfba-d533-419e-905b-a39d66db55b4" 00:21:42.365 } 00:21:42.365 ] 00:21:42.365 } 00:21:42.365 ] 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2663043 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2653763 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2653763 ']' 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2653763 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2653763 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2653763' 00:21:42.365 killing process with pid 2653763 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2653763 00:21:42.365 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2653763 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2663306 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2663306' 00:21:42.626 Process pid: 2663306 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2663306 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2663306 ']' 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.626 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:42.626 [2024-11-20 17:48:42.476865] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:21:42.626 [2024-11-20 17:48:42.477800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:42.626 [2024-11-20 17:48:42.477845] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.886 [2024-11-20 17:48:42.555809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.886 [2024-11-20 17:48:42.585144] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.886 [2024-11-20 17:48:42.585183] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.886 [2024-11-20 17:48:42.585189] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.886 [2024-11-20 17:48:42.585194] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.886 [2024-11-20 17:48:42.585198] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.886 [2024-11-20 17:48:42.585395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.886 [2024-11-20 17:48:42.585600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.886 [2024-11-20 17:48:42.585711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.886 [2024-11-20 17:48:42.585711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.886 [2024-11-20 17:48:42.641150] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:21:42.886 [2024-11-20 17:48:42.642270] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:21:42.887 [2024-11-20 17:48:42.643086] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:21:42.887 [2024-11-20 17:48:42.643658] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:21:42.887 [2024-11-20 17:48:42.643699] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
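With the target relaunched under --interrupt-mode (each reactor and nvmf poll-group thread above reports switching to intr mode), the script repeats the same per-device provisioning, this time creating the VFIOUSER transport with the interrupt-mode flags -M -I. A condensed sketch of the sequence traced below for the first device, with SPDK as shorthand for the tree used in this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # shorthand for this sketch
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    # Per device: a socket directory, a 64 MB malloc bdev (512-byte blocks),
    # a subsystem, one namespace, and a vfio-user listener rooted at that directory.
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second device (Malloc2, cnode2, vfio-user2/2) follows the same pattern in the trace.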
00:21:43.458 17:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.458 17:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:21:43.458 17:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:21:44.401 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:21:44.660 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:21:44.660 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:21:44.660 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:44.660 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:21:44.660 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:44.919 Malloc1 00:21:44.919 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:21:45.180 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:21:45.180 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:21:45.441 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:45.441 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:21:45.441 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:45.702 Malloc2 00:21:45.702 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:21:45.962 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:21:45.962 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:21:46.223 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:21:46.223 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2663306 00:21:46.223 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 2663306 ']' 00:21:46.223 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2663306 00:21:46.223 17:48:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:21:46.223 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.223 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2663306 00:21:46.223 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:46.223 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:46.223 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2663306' 00:21:46.223 killing process with pid 2663306 00:21:46.223 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2663306 00:21:46.223 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2663306 00:21:46.483 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:21:46.483 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:21:46.483 00:21:46.484 real 0m50.751s 00:21:46.484 user 3m14.422s 00:21:46.484 sys 0m2.750s 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:46.484 ************************************ 00:21:46.484 END TEST nvmf_vfio_user 00:21:46.484 ************************************ 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.484 ************************************ 00:21:46.484 START TEST nvmf_vfio_user_nvme_compliance 00:21:46.484 ************************************ 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:21:46.484 * Looking for test storage... 
00:21:46.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:21:46.484 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.745 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:46.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.746 --rc genhtml_branch_coverage=1 00:21:46.746 --rc genhtml_function_coverage=1 00:21:46.746 --rc genhtml_legend=1 00:21:46.746 --rc geninfo_all_blocks=1 00:21:46.746 --rc geninfo_unexecuted_blocks=1 00:21:46.746 00:21:46.746 ' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:46.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.746 --rc genhtml_branch_coverage=1 00:21:46.746 --rc genhtml_function_coverage=1 00:21:46.746 --rc genhtml_legend=1 00:21:46.746 --rc geninfo_all_blocks=1 00:21:46.746 --rc geninfo_unexecuted_blocks=1 00:21:46.746 00:21:46.746 ' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:46.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.746 --rc genhtml_branch_coverage=1 00:21:46.746 --rc genhtml_function_coverage=1 00:21:46.746 --rc genhtml_legend=1 00:21:46.746 --rc geninfo_all_blocks=1 00:21:46.746 --rc geninfo_unexecuted_blocks=1 00:21:46.746 00:21:46.746 ' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:46.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.746 --rc genhtml_branch_coverage=1 00:21:46.746 --rc genhtml_function_coverage=1 00:21:46.746 --rc genhtml_legend=1 00:21:46.746 --rc geninfo_all_blocks=1 00:21:46.746 --rc 
geninfo_unexecuted_blocks=1 00:21:46.746 00:21:46.746 ' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2664051 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2664051' 00:21:46.746 Process pid: 2664051 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2664051 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2664051 ']' 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.746 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:46.746 [2024-11-20 17:48:46.537555] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:46.746 [2024-11-20 17:48:46.537631] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.746 [2024-11-20 17:48:46.619738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:46.747 [2024-11-20 17:48:46.649333] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.747 [2024-11-20 17:48:46.649367] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.747 [2024-11-20 17:48:46.649373] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.747 [2024-11-20 17:48:46.649378] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.747 [2024-11-20 17:48:46.649382] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.747 [2024-11-20 17:48:46.649525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.747 [2024-11-20 17:48:46.649637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.747 [2024-11-20 17:48:46.649644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.689 17:48:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.689 17:48:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:21:47.689 17:48:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:21:48.632 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:48.632 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:21:48.632 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:48.633 malloc0 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:21:48.633 17:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.633 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:21:48.633 00:21:48.633 00:21:48.633 CUnit - A unit testing framework for C - Version 2.1-3 00:21:48.633 http://cunit.sourceforge.net/ 00:21:48.633 00:21:48.633 00:21:48.633 Suite: nvme_compliance 00:21:48.893 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 17:48:48.571136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:48.893 [2024-11-20 17:48:48.572430] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:21:48.893 [2024-11-20 17:48:48.572442] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:21:48.893 [2024-11-20 17:48:48.572447] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:21:48.893 [2024-11-20 17:48:48.574150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:48.893 passed 00:21:48.893 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 17:48:48.649638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:48.893 [2024-11-20 17:48:48.653672] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:48.893 passed 00:21:48.893 Test: admin_identify_ns ...[2024-11-20 17:48:48.730191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:48.893 [2024-11-20 17:48:48.791168] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:21:48.893 [2024-11-20 17:48:48.799170] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:21:49.154 [2024-11-20 17:48:48.820246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:21:49.154 passed 00:21:49.154 Test: admin_get_features_mandatory_features ...[2024-11-20 17:48:48.896293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.154 [2024-11-20 17:48:48.899315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.154 passed 00:21:49.154 Test: admin_get_features_optional_features ...[2024-11-20 17:48:48.977818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.154 [2024-11-20 17:48:48.980836] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.154 passed 00:21:49.154 Test: admin_set_features_number_of_queues ...[2024-11-20 17:48:49.055572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.414 [2024-11-20 17:48:49.161238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.414 passed 00:21:49.414 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 17:48:49.234406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.414 [2024-11-20 17:48:49.237421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.414 passed 00:21:49.415 Test: admin_get_log_page_with_lpo ...[2024-11-20 17:48:49.312514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.676 [2024-11-20 17:48:49.384168] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:21:49.676 [2024-11-20 17:48:49.397212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.676 passed 00:21:49.676 Test: fabric_property_get ...[2024-11-20 17:48:49.469404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.676 [2024-11-20 17:48:49.470602] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:21:49.676 [2024-11-20 17:48:49.472419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.676 passed 00:21:49.676 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 17:48:49.550892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.676 [2024-11-20 17:48:49.552094] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:21:49.676 [2024-11-20 17:48:49.553909] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.676 passed 00:21:49.935 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 17:48:49.627613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.935 [2024-11-20 17:48:49.712165] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:49.935 [2024-11-20 17:48:49.728167] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:49.935 [2024-11-20 17:48:49.733235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.935 passed 00:21:49.935 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 17:48:49.806424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:49.935 [2024-11-20 17:48:49.807622] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:21:49.935 [2024-11-20 17:48:49.809443] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:21:49.935 passed 00:21:50.194 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 17:48:49.886174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:50.194 [2024-11-20 17:48:49.963165] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:50.194 [2024-11-20 17:48:49.987165] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:50.194 [2024-11-20 17:48:49.992224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:50.194 passed 00:21:50.194 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 17:48:50.067434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:50.195 [2024-11-20 17:48:50.068642] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:21:50.195 [2024-11-20 17:48:50.068660] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:21:50.195 [2024-11-20 17:48:50.070454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:50.195 passed 00:21:50.455 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 17:48:50.146168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:50.455 [2024-11-20 17:48:50.240166] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:21:50.455 [2024-11-20 17:48:50.248169] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:21:50.455 [2024-11-20 17:48:50.256170] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:21:50.455 [2024-11-20 17:48:50.264168] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:21:50.455 [2024-11-20 17:48:50.293238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:50.455 passed 00:21:50.455 Test: admin_create_io_sq_verify_pc ...[2024-11-20 17:48:50.366457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:50.715 [2024-11-20 17:48:50.383173] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:21:50.715 [2024-11-20 17:48:50.400623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:50.715 passed 00:21:50.715 Test: admin_create_io_qp_max_qps ...[2024-11-20 17:48:50.477063] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:52.100 [2024-11-20 17:48:51.592165] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:21:52.100 [2024-11-20 17:48:51.982452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:52.100 passed 00:21:52.361 Test: admin_create_io_sq_shared_cq ...[2024-11-20 17:48:52.056530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:52.361 [2024-11-20 17:48:52.188165] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:52.361 [2024-11-20 17:48:52.225213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:52.361 passed 00:21:52.361 00:21:52.361 Run Summary: Type Total Ran Passed Failed Inactive 00:21:52.361 suites 1 1 n/a 0 0 00:21:52.361 tests 18 18 18 0 0 00:21:52.361 asserts 360 
360 360 0 n/a 00:21:52.361 00:21:52.361 Elapsed time = 1.503 seconds 00:21:52.361 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2664051 00:21:52.361 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2664051 ']' 00:21:52.361 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2664051 00:21:52.361 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2664051 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2664051' 00:21:52.622 killing process with pid 2664051 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2664051 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2664051 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:52.622 00:21:52.622 real 0m6.216s 00:21:52.622 user 0m17.607s 00:21:52.622 sys 0m0.565s 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:52.622 ************************************ 00:21:52.622 END TEST nvmf_vfio_user_nvme_compliance 00:21:52.622 ************************************ 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:52.622 ************************************ 00:21:52.622 START TEST nvmf_vfio_user_fuzz 00:21:52.622 ************************************ 00:21:52.622 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:52.883 * Looking for test storage... 
00:21:52.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.883 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:52.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.884 --rc genhtml_branch_coverage=1 00:21:52.884 --rc genhtml_function_coverage=1 00:21:52.884 --rc genhtml_legend=1 00:21:52.884 --rc geninfo_all_blocks=1 00:21:52.884 --rc geninfo_unexecuted_blocks=1 00:21:52.884 00:21:52.884 ' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:52.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.884 --rc genhtml_branch_coverage=1 00:21:52.884 --rc genhtml_function_coverage=1 00:21:52.884 --rc genhtml_legend=1 00:21:52.884 --rc geninfo_all_blocks=1 00:21:52.884 --rc geninfo_unexecuted_blocks=1 00:21:52.884 00:21:52.884 ' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:52.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.884 --rc genhtml_branch_coverage=1 00:21:52.884 --rc genhtml_function_coverage=1 00:21:52.884 --rc genhtml_legend=1 00:21:52.884 --rc geninfo_all_blocks=1 00:21:52.884 --rc geninfo_unexecuted_blocks=1 00:21:52.884 00:21:52.884 ' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:52.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.884 --rc genhtml_branch_coverage=1 00:21:52.884 --rc genhtml_function_coverage=1 00:21:52.884 --rc genhtml_legend=1 00:21:52.884 --rc geninfo_all_blocks=1 00:21:52.884 --rc geninfo_unexecuted_blocks=1 00:21:52.884 00:21:52.884 ' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:52.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:21:52.884 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2665431 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2665431' 00:21:52.885 Process pid: 2665431 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2665431 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2665431 ']' 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
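The "[: : integer expression expected" complaint above is bash's test builtin choking on a numeric comparison whose left-hand side expanded to the empty string ('[' '' -eq 1 ']' at nvmf/common.sh line 33); the run continues because the test simply fails and the guarded branch is skipped. A minimal sketch of the failing pattern and a defensive variant; SOME_FLAG is a hypothetical stand-in, since the log does not show which variable expands empty there:

#!/usr/bin/env bash
SOME_FLAG=""                             # stand-in for whatever is unset at common.sh:33
[ "$SOME_FLAG" -eq 1 ] && echo hit       # reproduces: [: : integer expression expected
[ "${SOME_FLAG:-0}" -eq 1 ] && echo hit  # defaulting empty to 0 silences the warning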
00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.885 17:48:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:53.825 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.825 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:21:53.825 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:54.765 malloc0 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:54.765 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.766 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:54.766 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.766 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:54.766 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.766 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
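Everything from nvmf_create_transport down to nvmf_subsystem_add_listener above is the standard vfio-user target bring-up, issued through the harness's rpc_cmd wrapper. The same sequence as direct rpc.py calls, condensed for reference; the rpc.py path is assumed to be the SPDK checkout this job uses, and all arguments are copied verbatim from the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER                       # vfio-user transport
mkdir -p /var/run/vfio-user                                  # socket directory for the listener
$rpc bdev_malloc_create 64 512 -b malloc0                    # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0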
00:21:54.766 17:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:22:26.875 Fuzzing completed. Shutting down the fuzz application 00:22:26.875 00:22:26.875 Dumping successful admin opcodes: 00:22:26.875 8, 9, 10, 24, 00:22:26.875 Dumping successful io opcodes: 00:22:26.875 0, 00:22:26.875 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1421680, total successful commands: 5587, random_seed: 2794249792 00:22:26.875 NS: 0x200003a1ef00 admin qp, Total commands completed: 353239, total successful commands: 2844, random_seed: 1361908544 00:22:26.875 17:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:22:26.875 17:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.875 17:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:26.875 17:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.875 17:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2665431 00:22:26.875 17:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2665431 ']' 00:22:26.875 17:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2665431 00:22:26.875 17:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2665431 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2665431' 00:22:26.875 killing process with pid 2665431 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2665431 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2665431 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:22:26.875 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:22:26.875 00:22:26.875 real 0m32.799s 00:22:26.875 user 0m37.678s 00:22:26.875 sys 0m24.540s 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:26.876 
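The fuzz pass above completed cleanly: roughly 1.42 M I/O commands and 353 K admin commands in the window, the "successful" counts presumably being the commands the target accepted rather than rejected. For reference, the invocation in isolation; the flags are copied verbatim from the log, and judging by the timestamps -t 30 is the runtime in seconds while -S 123456 appears to pin the random seed so the run is reproducible:

fuzz=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
$fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a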
************************************ 00:22:26.876 END TEST nvmf_vfio_user_fuzz 00:22:26.876 ************************************ 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:26.876 ************************************ 00:22:26.876 START TEST nvmf_auth_target 00:22:26.876 ************************************ 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:26.876 * Looking for test storage... 00:22:26.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:26.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.876 --rc genhtml_branch_coverage=1 00:22:26.876 --rc genhtml_function_coverage=1 00:22:26.876 --rc genhtml_legend=1 00:22:26.876 --rc geninfo_all_blocks=1 00:22:26.876 --rc geninfo_unexecuted_blocks=1 00:22:26.876 00:22:26.876 ' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:26.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.876 --rc genhtml_branch_coverage=1 00:22:26.876 --rc genhtml_function_coverage=1 00:22:26.876 --rc genhtml_legend=1 00:22:26.876 --rc geninfo_all_blocks=1 00:22:26.876 --rc geninfo_unexecuted_blocks=1 00:22:26.876 00:22:26.876 ' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:26.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.876 --rc genhtml_branch_coverage=1 00:22:26.876 --rc genhtml_function_coverage=1 00:22:26.876 --rc genhtml_legend=1 00:22:26.876 --rc geninfo_all_blocks=1 00:22:26.876 --rc geninfo_unexecuted_blocks=1 00:22:26.876 00:22:26.876 ' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:26.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.876 --rc genhtml_branch_coverage=1 00:22:26.876 --rc genhtml_function_coverage=1 00:22:26.876 --rc genhtml_legend=1 00:22:26.876 --rc geninfo_all_blocks=1 00:22:26.876 --rc geninfo_unexecuted_blocks=1 00:22:26.876 00:22:26.876 ' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.876 17:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.876 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.877 17:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:22:33.465 
17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:33.465 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:33.465 17:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:33.465 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:33.465 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:33.465 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
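The discovery pass above scanned the supported NIC ID tables and matched both ports of an Intel E810 (vendor:device 0x8086:0x159b, ice driver), exposed as net devices cvl_0_0 and cvl_0_1. A standalone sketch of the same sysfs walk, assuming lspci is available; the vendor:device filter is the one matched in the log:

for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
done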
00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.465 17:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.465 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.465 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.465 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.465 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:33.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:22:33.465 00:22:33.465 --- 10.0.0.2 ping statistics --- 00:22:33.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.465 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:22:33.465 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:22:33.466 00:22:33.466 --- 10.0.0.1 ping statistics --- 00:22:33.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.466 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=2675284 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 2675284 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2675284 ']' 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
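The plumbing above gives the target its own network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays on the host side as the initiator interface at 10.0.0.1/24, and the two pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace. Condensed below, with every name, address, and flag taken from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                  # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth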
00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.466 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.410 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.410 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:34.410 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:34.410 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.410 17:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2675436 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=146c98c1f32d6aa452547d4554c2b38f933adb81384165fe 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.U5E 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 146c98c1f32d6aa452547d4554c2b38f933adb81384165fe 0 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 146c98c1f32d6aa452547d4554c2b38f933adb81384165fe 0 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=146c98c1f32d6aa452547d4554c2b38f933adb81384165fe 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.U5E 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.U5E 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.U5E 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7ede2c95c31e67376b9512eeedd30774351664cfa5b1e29438e4319896a31369 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.yhC 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7ede2c95c31e67376b9512eeedd30774351664cfa5b1e29438e4319896a31369 3 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7ede2c95c31e67376b9512eeedd30774351664cfa5b1e29438e4319896a31369 3 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7ede2c95c31e67376b9512eeedd30774351664cfa5b1e29438e4319896a31369 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.yhC 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.yhC 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.yhC 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7545f84efb6179964d4d35d5c5b0594a 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.idm 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7545f84efb6179964d4d35d5c5b0594a 1 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7545f84efb6179964d4d35d5c5b0594a 1 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7545f84efb6179964d4d35d5c5b0594a 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.idm 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.idm 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.idm 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c6763af31c2768234e2248ec5d6f17e04325eeae1e79726e 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.tEP 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c6763af31c2768234e2248ec5d6f17e04325eeae1e79726e 2 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c6763af31c2768234e2248ec5d6f17e04325eeae1e79726e 2 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:22:34.410 17:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c6763af31c2768234e2248ec5d6f17e04325eeae1e79726e 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.tEP 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.tEP 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.tEP 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:22:34.410 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=662171882c7f03d5f7c425ac960f3db1d9363f9693bf540a 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.sp1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 662171882c7f03d5f7c425ac960f3db1d9363f9693bf540a 2 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 662171882c7f03d5f7c425ac960f3db1d9363f9693bf540a 2 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=662171882c7f03d5f7c425ac960f3db1d9363f9693bf540a 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.sp1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.sp1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.sp1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
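[Editor's note] The inline `python -` step is the one part the xtrace cannot show, but its effect is recoverable from the data in this log: every secret that later reaches nvme connect has the form DHHC-1:<id>:<base64>:, where <id> is the two-digit digest id from the digests map (null=0, sha256=1, sha384=2, sha512=3) and the base64 body decodes back to the ASCII hex string generated above plus four trailing bytes, a little-endian CRC32 suffix. A hedged sketch consistent with that; the heredoc body is inferred from the log output, not copied from nvmf/common.sh:

# format_dhchap_key <hexkey> <digest-id> just fixes the prefix, per the nvmf/common.sh@743 trace
format_dhchap_key() { format_key "DHHC-1" "$@"; }

format_key() {
    local prefix=$1 key=$2 digest=$3
    python - <<EOF
import base64, zlib
key = b"$key"                                # the ASCII hex string itself, not decoded bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian integrity suffix
# ids here are only 0-3, so hex vs decimal formatting is indistinguishable in this log
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}

As a spot check, base64-decoding YzY3NjNh... from the ckey1 secret used further down yields the hex string c6763af31c... generated in this block, followed by the four CRC bytes.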
00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c62d3b6bb456f3edc28c9156bdd7ad1d 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.YDn 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c62d3b6bb456f3edc28c9156bdd7ad1d 1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c62d3b6bb456f3edc28c9156bdd7ad1d 1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c62d3b6bb456f3edc28c9156bdd7ad1d 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.YDn 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.YDn 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.YDn 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=679c416f1ebc1b9f0d86bc6562318570e3545e88e3174d48c390976458032789 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.wwr 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key 679c416f1ebc1b9f0d86bc6562318570e3545e88e3174d48c390976458032789 3 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 679c416f1ebc1b9f0d86bc6562318570e3545e88e3174d48c390976458032789 3 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=679c416f1ebc1b9f0d86bc6562318570e3545e88e3174d48c390976458032789 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.wwr 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.wwr 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.wwr 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2675284 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2675284 ']' 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.673 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.674 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.674 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2675436 /var/tmp/host.sock 00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2675436 ']' 00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:34.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
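[Editor's note] Two SPDK app instances are being waited on here, and everything below is mirrored across them: rpc_cmd talks to the target app on /var/tmp/spdk.sock (pid 2675284), while the hostrpc wrapper traced at target/auth.sh@31 pins rpc.py to the initiator-side app on /var/tmp/host.sock (pid 2675436). The registration loop that follows runs each keyring_file_add_key twice, once per socket. A sketch of that loop, with the wrapper shapes assumed from the traces (the real rpc_cmd in autotest_common.sh is more elaborate than a plain wrapper):

# assumed wrapper shapes; rootdir is the spdk checkout in the jenkins workspace
rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }                        # target, default /var/tmp/spdk.sock
hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side nvme/bdev app

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"       # target/auth.sh@109
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"       # target/auth.sh@110
    if [[ -n ${ckeys[$i]} ]]; then                           # ckeys[3] is empty, so key3 gets no ctrlr key
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}" # target/auth.sh@112
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}" # target/auth.sh@113
    fi
done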
00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.934 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.U5E 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.U5E 00:22:35.195 17:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.U5E 00:22:35.195 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.yhC ]] 00:22:35.195 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yhC 00:22:35.195 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.195 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yhC 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yhC 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.idm 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.456 17:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.idm 00:22:35.456 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.idm 00:22:35.716 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.tEP ]] 00:22:35.716 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tEP 00:22:35.716 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.716 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.716 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.716 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tEP 00:22:35.716 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tEP 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sp1 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.sp1 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.sp1 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.YDn ]] 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YDn 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YDn 00:22:35.977 17:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YDn 00:22:36.238 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:36.238 17:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wwr 00:22:36.238 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.238 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.238 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.238 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.wwr 00:22:36.238 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.wwr 00:22:36.499 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.500 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.500 
17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.760 00:22:36.760 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.760 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.760 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.021 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.021 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.021 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.021 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.021 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.021 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.021 { 00:22:37.021 "cntlid": 1, 00:22:37.021 "qid": 0, 00:22:37.021 "state": "enabled", 00:22:37.021 "thread": "nvmf_tgt_poll_group_000", 00:22:37.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:37.021 "listen_address": { 00:22:37.021 "trtype": "TCP", 00:22:37.021 "adrfam": "IPv4", 00:22:37.021 "traddr": "10.0.0.2", 00:22:37.021 "trsvcid": "4420" 00:22:37.021 }, 00:22:37.021 "peer_address": { 00:22:37.021 "trtype": "TCP", 00:22:37.021 "adrfam": "IPv4", 00:22:37.021 "traddr": "10.0.0.1", 00:22:37.021 "trsvcid": "54902" 00:22:37.021 }, 00:22:37.021 "auth": { 00:22:37.021 "state": "completed", 00:22:37.021 "digest": "sha256", 00:22:37.021 "dhgroup": "null" 00:22:37.021 } 00:22:37.021 } 00:22:37.021 ]' 00:22:37.021 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.021 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:37.282 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.282 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:37.282 17:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.282 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.282 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.282 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.542 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:22:37.542 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:22:38.112 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.112 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.112 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.112 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.112 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.112 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.112 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:38.112 17:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.372 17:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.372 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.633 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.633 { 00:22:38.633 "cntlid": 3, 00:22:38.633 "qid": 0, 00:22:38.633 "state": "enabled", 00:22:38.633 "thread": "nvmf_tgt_poll_group_000", 00:22:38.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:38.633 "listen_address": { 00:22:38.633 "trtype": "TCP", 00:22:38.633 "adrfam": "IPv4", 00:22:38.633 "traddr": "10.0.0.2", 00:22:38.633 "trsvcid": "4420" 00:22:38.633 }, 00:22:38.633 "peer_address": { 00:22:38.633 "trtype": "TCP", 00:22:38.633 "adrfam": "IPv4", 00:22:38.633 "traddr": "10.0.0.1", 00:22:38.633 "trsvcid": "54926" 00:22:38.633 }, 00:22:38.633 "auth": { 00:22:38.633 "state": "completed", 00:22:38.633 "digest": "sha256", 00:22:38.633 "dhgroup": "null" 00:22:38.633 } 00:22:38.633 } 00:22:38.633 ]' 00:22:38.633 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.893 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:38.893 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.893 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:38.893 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.893 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.893 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.893 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.154 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:22:39.154 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:22:39.727 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.727 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.727 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.727 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.727 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.727 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.727 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:39.727 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.987 17:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.987 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.987 00:22:40.247 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.247 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.247 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.247 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.247 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.247 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.247 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.247 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.247 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.247 { 00:22:40.247 "cntlid": 5, 00:22:40.247 "qid": 0, 00:22:40.247 "state": "enabled", 00:22:40.247 "thread": "nvmf_tgt_poll_group_000", 00:22:40.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:40.247 "listen_address": { 00:22:40.247 "trtype": "TCP", 00:22:40.247 "adrfam": "IPv4", 00:22:40.247 "traddr": "10.0.0.2", 00:22:40.247 "trsvcid": "4420" 00:22:40.247 }, 00:22:40.247 "peer_address": { 00:22:40.247 "trtype": "TCP", 00:22:40.247 "adrfam": "IPv4", 00:22:40.247 "traddr": "10.0.0.1", 00:22:40.247 "trsvcid": "54954" 00:22:40.247 }, 00:22:40.247 "auth": { 00:22:40.247 "state": "completed", 00:22:40.247 "digest": "sha256", 00:22:40.247 "dhgroup": "null" 00:22:40.247 } 00:22:40.247 } 00:22:40.247 ]' 00:22:40.247 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.247 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:40.508 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.508 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:40.508 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.508 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.508 17:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.508 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.769 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:22:40.769 17:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:22:41.340 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.340 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.340 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.340 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.340 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.340 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.340 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:41.340 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:41.600 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:22:41.600 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.600 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:41.600 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:41.600 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:41.600 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.600 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:41.601 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.601 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.601 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.601 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:41.601 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.601 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.601 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.862 { 00:22:41.862 "cntlid": 7, 00:22:41.862 "qid": 0, 00:22:41.862 "state": "enabled", 00:22:41.862 "thread": "nvmf_tgt_poll_group_000", 00:22:41.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:41.862 "listen_address": { 00:22:41.862 "trtype": "TCP", 00:22:41.862 "adrfam": "IPv4", 00:22:41.862 "traddr": "10.0.0.2", 00:22:41.862 "trsvcid": "4420" 00:22:41.862 }, 00:22:41.862 "peer_address": { 00:22:41.862 "trtype": "TCP", 00:22:41.862 "adrfam": "IPv4", 00:22:41.862 "traddr": "10.0.0.1", 00:22:41.862 "trsvcid": "54996" 00:22:41.862 }, 00:22:41.862 "auth": { 00:22:41.862 "state": "completed", 00:22:41.862 "digest": "sha256", 00:22:41.862 "dhgroup": "null" 00:22:41.862 } 00:22:41.862 } 00:22:41.862 ]' 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:41.862 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.124 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:42.124 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.124 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.124 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.124 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.124 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:22:42.124 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:22:43.065 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.065 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.065 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.065 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.065 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.065 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.066 17:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.326 00:22:43.326 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.326 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.326 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.587 { 00:22:43.587 "cntlid": 9, 00:22:43.587 "qid": 0, 00:22:43.587 "state": "enabled", 00:22:43.587 "thread": "nvmf_tgt_poll_group_000", 00:22:43.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:43.587 "listen_address": { 00:22:43.587 "trtype": "TCP", 00:22:43.587 "adrfam": "IPv4", 00:22:43.587 "traddr": "10.0.0.2", 00:22:43.587 "trsvcid": "4420" 00:22:43.587 }, 00:22:43.587 "peer_address": { 00:22:43.587 "trtype": "TCP", 00:22:43.587 "adrfam": "IPv4", 00:22:43.587 "traddr": "10.0.0.1", 00:22:43.587 "trsvcid": "55022" 00:22:43.587 }, 00:22:43.587 "auth": { 00:22:43.587 "state": "completed", 00:22:43.587 "digest": "sha256", 00:22:43.587 "dhgroup": "ffdhe2048" 00:22:43.587 } 00:22:43.587 } 00:22:43.587 ]' 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.587 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.848 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:22:43.848 17:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:22:44.418 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.418 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.418 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.418 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.418 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.418 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.418 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:44.418 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.681 17:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.681 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.942 00:22:44.942 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.942 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.942 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.202 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.202 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.202 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.202 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.202 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.202 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.202 { 00:22:45.202 "cntlid": 11, 00:22:45.202 "qid": 0, 00:22:45.202 "state": "enabled", 00:22:45.202 "thread": "nvmf_tgt_poll_group_000", 00:22:45.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:45.202 "listen_address": { 00:22:45.202 "trtype": "TCP", 00:22:45.202 "adrfam": "IPv4", 00:22:45.202 "traddr": "10.0.0.2", 00:22:45.202 "trsvcid": "4420" 00:22:45.202 }, 00:22:45.202 "peer_address": { 00:22:45.202 "trtype": "TCP", 00:22:45.202 "adrfam": "IPv4", 00:22:45.202 "traddr": "10.0.0.1", 00:22:45.202 "trsvcid": "55054" 00:22:45.202 }, 00:22:45.202 "auth": { 00:22:45.202 "state": "completed", 00:22:45.202 "digest": "sha256", 00:22:45.202 "dhgroup": "ffdhe2048" 00:22:45.202 } 00:22:45.202 } 00:22:45.202 ]' 00:22:45.202 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.203 17:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:45.203 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.203 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:45.203 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.203 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.203 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.203 17:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.463 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:22:45.463 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:22:46.032 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.032 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.032 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.032 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.032 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.032 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.032 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.032 17:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:46.292 17:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.292 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.551 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.551 { 00:22:46.551 "cntlid": 13, 00:22:46.551 "qid": 0, 00:22:46.551 "state": "enabled", 00:22:46.551 "thread": "nvmf_tgt_poll_group_000", 00:22:46.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:46.551 "listen_address": { 00:22:46.551 "trtype": "TCP", 00:22:46.551 "adrfam": "IPv4", 00:22:46.551 "traddr": "10.0.0.2", 00:22:46.551 "trsvcid": "4420" 00:22:46.551 }, 00:22:46.551 "peer_address": { 00:22:46.551 "trtype": "TCP", 00:22:46.551 "adrfam": "IPv4", 00:22:46.551 "traddr": "10.0.0.1", 00:22:46.551 "trsvcid": "48374" 00:22:46.551 }, 00:22:46.551 "auth": { 00:22:46.551 "state": "completed", 00:22:46.551 "digest": 
"sha256", 00:22:46.551 "dhgroup": "ffdhe2048" 00:22:46.551 } 00:22:46.551 } 00:22:46.551 ]' 00:22:46.551 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.811 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:46.811 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.811 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:46.811 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.811 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.811 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.811 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.071 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:22:47.071 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:22:47.642 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.642 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.642 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.642 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.642 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.642 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.642 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.642 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.901 17:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:47.901 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:48.162 00:22:48.162 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.162 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.162 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.162 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.162 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.162 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.162 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.162 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.162 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.162 { 00:22:48.162 "cntlid": 15, 00:22:48.162 "qid": 0, 00:22:48.162 "state": "enabled", 00:22:48.162 "thread": "nvmf_tgt_poll_group_000", 00:22:48.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:48.162 "listen_address": { 00:22:48.162 "trtype": "TCP", 00:22:48.162 "adrfam": "IPv4", 00:22:48.162 "traddr": "10.0.0.2", 00:22:48.162 "trsvcid": "4420" 00:22:48.162 }, 00:22:48.162 "peer_address": { 00:22:48.162 "trtype": "TCP", 00:22:48.162 "adrfam": "IPv4", 00:22:48.162 "traddr": "10.0.0.1", 00:22:48.162 
"trsvcid": "48404" 00:22:48.162 }, 00:22:48.162 "auth": { 00:22:48.162 "state": "completed", 00:22:48.162 "digest": "sha256", 00:22:48.162 "dhgroup": "ffdhe2048" 00:22:48.162 } 00:22:48.162 } 00:22:48.162 ]' 00:22:48.162 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.424 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:48.424 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.424 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:48.424 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.424 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.424 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.424 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.685 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:22:48.685 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:49.256 17:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:49.257 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:49.257 17:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.257 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:49.257 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:49.257 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:49.257 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.257 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.257 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.257 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.523 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.523 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.523 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.523 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.523 00:22:49.523 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.523 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.523 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.856 { 00:22:49.856 "cntlid": 17, 00:22:49.856 "qid": 0, 00:22:49.856 "state": "enabled", 00:22:49.856 "thread": "nvmf_tgt_poll_group_000", 00:22:49.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:49.856 "listen_address": { 00:22:49.856 "trtype": "TCP", 00:22:49.856 "adrfam": "IPv4", 
00:22:49.856 "traddr": "10.0.0.2", 00:22:49.856 "trsvcid": "4420" 00:22:49.856 }, 00:22:49.856 "peer_address": { 00:22:49.856 "trtype": "TCP", 00:22:49.856 "adrfam": "IPv4", 00:22:49.856 "traddr": "10.0.0.1", 00:22:49.856 "trsvcid": "48420" 00:22:49.856 }, 00:22:49.856 "auth": { 00:22:49.856 "state": "completed", 00:22:49.856 "digest": "sha256", 00:22:49.856 "dhgroup": "ffdhe3072" 00:22:49.856 } 00:22:49.856 } 00:22:49.856 ]' 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:49.856 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.176 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.176 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.176 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.176 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:22:50.176 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:22:50.746 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.746 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.746 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.746 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.746 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.746 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.746 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:50.746 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.007 17:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.268 00:22:51.268 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.268 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.268 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.528 { 
00:22:51.528 "cntlid": 19, 00:22:51.528 "qid": 0, 00:22:51.528 "state": "enabled", 00:22:51.528 "thread": "nvmf_tgt_poll_group_000", 00:22:51.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:51.528 "listen_address": { 00:22:51.528 "trtype": "TCP", 00:22:51.528 "adrfam": "IPv4", 00:22:51.528 "traddr": "10.0.0.2", 00:22:51.528 "trsvcid": "4420" 00:22:51.528 }, 00:22:51.528 "peer_address": { 00:22:51.528 "trtype": "TCP", 00:22:51.528 "adrfam": "IPv4", 00:22:51.528 "traddr": "10.0.0.1", 00:22:51.528 "trsvcid": "48442" 00:22:51.528 }, 00:22:51.528 "auth": { 00:22:51.528 "state": "completed", 00:22:51.528 "digest": "sha256", 00:22:51.528 "dhgroup": "ffdhe3072" 00:22:51.528 } 00:22:51.528 } 00:22:51.528 ]' 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.528 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.789 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:22:51.789 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:22:52.360 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.360 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.360 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.360 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.360 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.360 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.360 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:52.360 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.621 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.882 00:22:52.882 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.882 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.882 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.144 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.144 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.144 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.144 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.144 17:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.144 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.144 { 00:22:53.144 "cntlid": 21, 00:22:53.144 "qid": 0, 00:22:53.144 "state": "enabled", 00:22:53.144 "thread": "nvmf_tgt_poll_group_000", 00:22:53.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:53.144 "listen_address": { 00:22:53.144 "trtype": "TCP", 00:22:53.144 "adrfam": "IPv4", 00:22:53.144 "traddr": "10.0.0.2", 00:22:53.144 "trsvcid": "4420" 00:22:53.144 }, 00:22:53.144 "peer_address": { 00:22:53.144 "trtype": "TCP", 00:22:53.144 "adrfam": "IPv4", 00:22:53.144 "traddr": "10.0.0.1", 00:22:53.144 "trsvcid": "48484" 00:22:53.144 }, 00:22:53.144 "auth": { 00:22:53.144 "state": "completed", 00:22:53.144 "digest": "sha256", 00:22:53.144 "dhgroup": "ffdhe3072" 00:22:53.144 } 00:22:53.144 } 00:22:53.144 ]' 00:22:53.144 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.144 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:53.144 17:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.144 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:53.144 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.144 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.144 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.144 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.405 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:22:53.405 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:22:53.978 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.978 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.978 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.978 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.240 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:22:54.240 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.240 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:54.240 17:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:54.240 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:54.501 00:22:54.501 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.501 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.501 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.761 17:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.761 { 00:22:54.761 "cntlid": 23, 00:22:54.761 "qid": 0, 00:22:54.761 "state": "enabled", 00:22:54.761 "thread": "nvmf_tgt_poll_group_000", 00:22:54.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:54.761 "listen_address": { 00:22:54.761 "trtype": "TCP", 00:22:54.761 "adrfam": "IPv4", 00:22:54.761 "traddr": "10.0.0.2", 00:22:54.761 "trsvcid": "4420" 00:22:54.761 }, 00:22:54.761 "peer_address": { 00:22:54.761 "trtype": "TCP", 00:22:54.761 "adrfam": "IPv4", 00:22:54.761 "traddr": "10.0.0.1", 00:22:54.761 "trsvcid": "48506" 00:22:54.761 }, 00:22:54.761 "auth": { 00:22:54.761 "state": "completed", 00:22:54.761 "digest": "sha256", 00:22:54.761 "dhgroup": "ffdhe3072" 00:22:54.761 } 00:22:54.761 } 00:22:54.761 ]' 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.761 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.022 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:22:55.022 17:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:55.593 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.855 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.116 00:22:56.116 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.116 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.116 17:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.377 { 00:22:56.377 "cntlid": 25, 00:22:56.377 "qid": 0, 00:22:56.377 "state": "enabled", 00:22:56.377 "thread": "nvmf_tgt_poll_group_000", 00:22:56.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:56.377 "listen_address": { 00:22:56.377 "trtype": "TCP", 00:22:56.377 "adrfam": "IPv4", 00:22:56.377 "traddr": "10.0.0.2", 00:22:56.377 "trsvcid": "4420" 00:22:56.377 }, 00:22:56.377 "peer_address": { 00:22:56.377 "trtype": "TCP", 00:22:56.377 "adrfam": "IPv4", 00:22:56.377 "traddr": "10.0.0.1", 00:22:56.377 "trsvcid": "42914" 00:22:56.377 }, 00:22:56.377 "auth": { 00:22:56.377 "state": "completed", 00:22:56.377 "digest": "sha256", 00:22:56.377 "dhgroup": "ffdhe4096" 00:22:56.377 } 00:22:56.377 } 00:22:56.377 ]' 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:56.377 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.637 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.637 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.637 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.637 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:22:56.637 17:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:22:57.207 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.468 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.728 00:22:57.728 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.728 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.728 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.988 { 00:22:57.988 "cntlid": 27, 00:22:57.988 "qid": 0, 00:22:57.988 "state": "enabled", 00:22:57.988 "thread": "nvmf_tgt_poll_group_000", 00:22:57.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:57.988 "listen_address": { 00:22:57.988 "trtype": "TCP", 00:22:57.988 "adrfam": "IPv4", 00:22:57.988 "traddr": "10.0.0.2", 00:22:57.988 "trsvcid": "4420" 00:22:57.988 }, 00:22:57.988 "peer_address": { 00:22:57.988 "trtype": "TCP", 00:22:57.988 "adrfam": "IPv4", 00:22:57.988 "traddr": "10.0.0.1", 00:22:57.988 "trsvcid": "42942" 00:22:57.988 }, 00:22:57.988 "auth": { 00:22:57.988 "state": "completed", 00:22:57.988 "digest": "sha256", 00:22:57.988 "dhgroup": "ffdhe4096" 00:22:57.988 } 00:22:57.988 } 00:22:57.988 ]' 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.988 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:58.249 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.249 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.249 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.249 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.249 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:22:58.249 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:59.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.192 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.453 00:22:59.453 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
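Each connect_authenticate iteration in this log follows one fixed pattern: bdev_nvme_set_options pins the host to a single DH-HMAC-CHAP digest and DH group, nvmf_subsystem_add_host authorizes the host NQN on the subsystem with the key pair under test, bdev_nvme_attach_controller performs the authenticated connect, and nvmf_subsystem_get_qpairs reports what was negotiated. A minimal sketch of one such cycle (rpc.py paths shortened; <host-nqn> stands for the uuid-based host NQN seen above; key1/ckey1 are assumed to be already loaded on the host side):

  # host side: allow only sha256 / ffdhe4096 for DH-HMAC-CHAP
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # target side: authorize the host with bidirectional keys
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: authenticated attach, then inspect the negotiated auth block
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'

The teardown mirrors it: bdev_nvme_detach_controller nvme0, one nvme connect/disconnect pass with the raw DHHC-1 secrets, then nvmf_subsystem_remove_host before the next key index is tested.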
00:22:59.453 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.453 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.714 { 00:22:59.714 "cntlid": 29, 00:22:59.714 "qid": 0, 00:22:59.714 "state": "enabled", 00:22:59.714 "thread": "nvmf_tgt_poll_group_000", 00:22:59.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:59.714 "listen_address": { 00:22:59.714 "trtype": "TCP", 00:22:59.714 "adrfam": "IPv4", 00:22:59.714 "traddr": "10.0.0.2", 00:22:59.714 "trsvcid": "4420" 00:22:59.714 }, 00:22:59.714 "peer_address": { 00:22:59.714 "trtype": "TCP", 00:22:59.714 "adrfam": "IPv4", 00:22:59.714 "traddr": "10.0.0.1", 00:22:59.714 "trsvcid": "42962" 00:22:59.714 }, 00:22:59.714 "auth": { 00:22:59.714 "state": "completed", 00:22:59.714 "digest": "sha256", 00:22:59.714 "dhgroup": "ffdhe4096" 00:22:59.714 } 00:22:59.714 } 00:22:59.714 ]' 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.714 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.975 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:22:59.975 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: 
--dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:00.546 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.546 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.546 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.546 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.546 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.546 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.546 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:00.546 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:00.806 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:23:00.806 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.806 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:00.806 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:00.806 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:00.806 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.806 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:00.807 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.807 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.807 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.807 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:00.807 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.807 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:01.067 00:23:01.067 17:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.067 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.067 17:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.328 { 00:23:01.328 "cntlid": 31, 00:23:01.328 "qid": 0, 00:23:01.328 "state": "enabled", 00:23:01.328 "thread": "nvmf_tgt_poll_group_000", 00:23:01.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:01.328 "listen_address": { 00:23:01.328 "trtype": "TCP", 00:23:01.328 "adrfam": "IPv4", 00:23:01.328 "traddr": "10.0.0.2", 00:23:01.328 "trsvcid": "4420" 00:23:01.328 }, 00:23:01.328 "peer_address": { 00:23:01.328 "trtype": "TCP", 00:23:01.328 "adrfam": "IPv4", 00:23:01.328 "traddr": "10.0.0.1", 00:23:01.328 "trsvcid": "42988" 00:23:01.328 }, 00:23:01.328 "auth": { 00:23:01.328 "state": "completed", 00:23:01.328 "digest": "sha256", 00:23:01.328 "dhgroup": "ffdhe4096" 00:23:01.328 } 00:23:01.328 } 00:23:01.328 ]' 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.328 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.594 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:01.594 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:02.171 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.431 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.692 00:23:02.692 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.692 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.692 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.953 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.954 { 00:23:02.954 "cntlid": 33, 00:23:02.954 "qid": 0, 00:23:02.954 "state": "enabled", 00:23:02.954 "thread": "nvmf_tgt_poll_group_000", 00:23:02.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:02.954 "listen_address": { 00:23:02.954 "trtype": "TCP", 00:23:02.954 "adrfam": "IPv4", 00:23:02.954 "traddr": "10.0.0.2", 00:23:02.954 "trsvcid": "4420" 00:23:02.954 }, 00:23:02.954 "peer_address": { 00:23:02.954 "trtype": "TCP", 00:23:02.954 "adrfam": "IPv4", 00:23:02.954 "traddr": "10.0.0.1", 00:23:02.954 "trsvcid": "43002" 00:23:02.954 }, 00:23:02.954 "auth": { 00:23:02.954 "state": "completed", 00:23:02.954 "digest": "sha256", 00:23:02.954 "dhgroup": "ffdhe6144" 00:23:02.954 } 00:23:02.954 } 00:23:02.954 ]' 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:02.954 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.214 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.214 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.214 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.214 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret 
DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:03.214 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.159 17:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.420 00:23:04.420 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.420 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.420 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.680 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.680 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.680 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.680 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.681 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.681 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.681 { 00:23:04.681 "cntlid": 35, 00:23:04.681 "qid": 0, 00:23:04.681 "state": "enabled", 00:23:04.681 "thread": "nvmf_tgt_poll_group_000", 00:23:04.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:04.681 "listen_address": { 00:23:04.681 "trtype": "TCP", 00:23:04.681 "adrfam": "IPv4", 00:23:04.681 "traddr": "10.0.0.2", 00:23:04.681 "trsvcid": "4420" 00:23:04.681 }, 00:23:04.681 "peer_address": { 00:23:04.681 "trtype": "TCP", 00:23:04.681 "adrfam": "IPv4", 00:23:04.681 "traddr": "10.0.0.1", 00:23:04.681 "trsvcid": "43024" 00:23:04.681 }, 00:23:04.681 "auth": { 00:23:04.681 "state": "completed", 00:23:04.681 "digest": "sha256", 00:23:04.681 "dhgroup": "ffdhe6144" 00:23:04.681 } 00:23:04.681 } 00:23:04.681 ]' 00:23:04.681 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.681 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:04.681 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.681 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:04.681 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.941 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.941 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.941 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.941 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:04.941 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.884 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.145 00:23:06.145 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.145 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.145 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.405 { 00:23:06.405 "cntlid": 37, 00:23:06.405 "qid": 0, 00:23:06.405 "state": "enabled", 00:23:06.405 "thread": "nvmf_tgt_poll_group_000", 00:23:06.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:06.405 "listen_address": { 00:23:06.405 "trtype": "TCP", 00:23:06.405 "adrfam": "IPv4", 00:23:06.405 "traddr": "10.0.0.2", 00:23:06.405 "trsvcid": "4420" 00:23:06.405 }, 00:23:06.405 "peer_address": { 00:23:06.405 "trtype": "TCP", 00:23:06.405 "adrfam": "IPv4", 00:23:06.405 "traddr": "10.0.0.1", 00:23:06.405 "trsvcid": "32848" 00:23:06.405 }, 00:23:06.405 "auth": { 00:23:06.405 "state": "completed", 00:23:06.405 "digest": "sha256", 00:23:06.405 "dhgroup": "ffdhe6144" 00:23:06.405 } 00:23:06.405 } 00:23:06.405 ]' 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:06.405 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.666 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.666 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:23:06.666 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.667 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:06.667 17:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.607 17:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.607 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:07.608 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.608 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.867 00:23:07.867 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.867 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.867 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.127 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.127 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.127 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.127 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.127 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.127 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.127 { 00:23:08.127 "cntlid": 39, 00:23:08.127 "qid": 0, 00:23:08.127 "state": "enabled", 00:23:08.127 "thread": "nvmf_tgt_poll_group_000", 00:23:08.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:08.127 "listen_address": { 00:23:08.127 "trtype": "TCP", 00:23:08.127 "adrfam": "IPv4", 00:23:08.127 "traddr": "10.0.0.2", 00:23:08.127 "trsvcid": "4420" 00:23:08.127 }, 00:23:08.127 "peer_address": { 00:23:08.127 "trtype": "TCP", 00:23:08.127 "adrfam": "IPv4", 00:23:08.127 "traddr": "10.0.0.1", 00:23:08.127 "trsvcid": "32884" 00:23:08.127 }, 00:23:08.127 "auth": { 00:23:08.127 "state": "completed", 00:23:08.127 "digest": "sha256", 00:23:08.127 "dhgroup": "ffdhe6144" 00:23:08.127 } 00:23:08.127 } 00:23:08.127 ]' 00:23:08.127 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.128 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:08.128 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.128 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:08.128 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.389 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:23:08.389 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.389 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.389 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:08.389 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:09.330 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.330 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.330 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.330 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.331 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.331 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.331 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.331 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:09.331 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
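The --dhchap-secret/--dhchap-ctrl-secret values passed to nvme connect throughout this run use the NVMe in-band authentication representation from TP 8006, DHHC-1:<t>:<base64 payload>:, where <t> appears to encode the transform applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), consistent with the differing key lengths visible above. A sketch of producing such a secret with nvme-cli (assuming a build that ships gen-dhchap-key; the flags shown are the common ones, not taken from this log):

  # 32-byte random secret, SHA-256 transform, bound to the host NQN
  nvme gen-dhchap-key -l 32 -m 1 \
    -n nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be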
00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.331 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.903 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.903 { 00:23:09.903 "cntlid": 41, 00:23:09.903 "qid": 0, 00:23:09.903 "state": "enabled", 00:23:09.903 "thread": "nvmf_tgt_poll_group_000", 00:23:09.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:09.903 "listen_address": { 00:23:09.903 "trtype": "TCP", 00:23:09.903 "adrfam": "IPv4", 00:23:09.903 "traddr": "10.0.0.2", 00:23:09.903 "trsvcid": "4420" 00:23:09.903 }, 00:23:09.903 "peer_address": { 00:23:09.903 "trtype": "TCP", 00:23:09.903 "adrfam": "IPv4", 00:23:09.903 "traddr": "10.0.0.1", 00:23:09.903 "trsvcid": "32908" 00:23:09.903 }, 00:23:09.903 "auth": { 00:23:09.903 "state": "completed", 00:23:09.903 "digest": "sha256", 00:23:09.903 "dhgroup": "ffdhe8192" 00:23:09.903 } 00:23:09.903 } 00:23:09.903 ]' 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.903 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:10.164 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.164 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:10.164 17:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.164 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.164 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.164 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.425 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:10.425 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:10.996 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.996 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.996 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.996 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.996 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.996 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.996 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:10.996 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.257 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.517 00:23:11.517 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.517 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.517 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.777 { 00:23:11.777 "cntlid": 43, 00:23:11.777 "qid": 0, 00:23:11.777 "state": "enabled", 00:23:11.777 "thread": "nvmf_tgt_poll_group_000", 00:23:11.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:11.777 "listen_address": { 00:23:11.777 "trtype": "TCP", 00:23:11.777 "adrfam": "IPv4", 00:23:11.777 "traddr": "10.0.0.2", 00:23:11.777 "trsvcid": "4420" 00:23:11.777 }, 00:23:11.777 "peer_address": { 00:23:11.777 "trtype": "TCP", 00:23:11.777 "adrfam": "IPv4", 00:23:11.777 "traddr": "10.0.0.1", 00:23:11.777 "trsvcid": "32944" 00:23:11.777 }, 00:23:11.777 "auth": { 00:23:11.777 "state": "completed", 00:23:11.777 "digest": "sha256", 00:23:11.777 "dhgroup": "ffdhe8192" 00:23:11.777 } 00:23:11.777 } 00:23:11.777 ]' 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:23:11.777 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.038 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:12.038 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.038 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.038 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.038 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.038 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:12.038 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:12.981 17:50:12 
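(A note for readers tracing the flow: two RPC sockets are in play here. The hostrpc helper, auth.sh line 31 in the traces, drives the SPDK host application on /var/tmp/host.sock, while plain rpc_cmd drives the nvmf target on its default socket.) A minimal sketch of one host-side pass, assuming $rootdir stands in for the spdk checkout path and $hostnqn/$keyid stand in for the uuid host NQN and key index printed above:

    # hostrpc: send RPCs to the SPDK host (initiator) app, not the target
    hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }

    # Pin the host to exactly one digest/dhgroup combination for this pass
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Attach a controller; this succeeds only if DH-HMAC-CHAP completes
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
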
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.981 17:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.554 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.554 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.554 { 00:23:13.554 "cntlid": 45, 00:23:13.554 "qid": 0, 00:23:13.554 "state": "enabled", 00:23:13.554 "thread": "nvmf_tgt_poll_group_000", 00:23:13.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:13.554 "listen_address": { 00:23:13.554 "trtype": "TCP", 00:23:13.554 "adrfam": "IPv4", 00:23:13.554 "traddr": "10.0.0.2", 00:23:13.554 "trsvcid": "4420" 00:23:13.554 }, 00:23:13.554 "peer_address": { 00:23:13.554 "trtype": "TCP", 00:23:13.554 "adrfam": "IPv4", 00:23:13.554 "traddr": "10.0.0.1", 00:23:13.554 "trsvcid": "32966" 00:23:13.554 }, 00:23:13.554 "auth": { 00:23:13.554 "state": "completed", 00:23:13.554 "digest": "sha256", 00:23:13.554 "dhgroup": "ffdhe8192" 00:23:13.554 } 00:23:13.554 } 00:23:13.554 ]' 00:23:13.554 
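The qpairs dump above is what the assertions that follow parse. A sketch of that verification step, using the same rpc_cmd target-side helper and the jq filters visible in the traces:

    # Fetch the subsystem's active queue pairs from the target...
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # ...and confirm authentication negotiated the combination under test
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
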
17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.815 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:13.815 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.815 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:13.815 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.815 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.815 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.815 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.075 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:14.075 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:14.646 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.646 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.646 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.646 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.646 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.646 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:14.646 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.646 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:14.906 17:50:14 
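Each combination is exercised twice: once through the SPDK host stack (bdev_nvme_attach_controller) and once through the kernel initiator via nvme-cli, as in the connect/disconnect pair above. A sketch of the kernel-side step, with the base64 secrets deliberately elided and $hostnqn/$hostid standing in for the uuid values in the log; the DHHC-1:<t>:<base64>: wrapper is the NVMe DH-HMAC-CHAP secret representation, where the second field indicates which hash, if any, was used to transform the stored secret (00 meaning untransformed):

    # Kernel initiator: same subsystem, same key pair, in-band authentication
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:02:<elided>:" \
        --dhchap-ctrl-secret "DHHC-1:01:<elided>:"

    # Tear the session down again before the next pass
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
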
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.906 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.166 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.426 { 00:23:15.426 "cntlid": 47, 00:23:15.426 "qid": 0, 00:23:15.426 "state": "enabled", 00:23:15.426 "thread": "nvmf_tgt_poll_group_000", 00:23:15.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:15.426 "listen_address": { 00:23:15.426 "trtype": "TCP", 00:23:15.426 "adrfam": "IPv4", 00:23:15.426 "traddr": "10.0.0.2", 00:23:15.426 "trsvcid": "4420" 00:23:15.426 }, 00:23:15.426 "peer_address": { 00:23:15.426 "trtype": "TCP", 00:23:15.426 "adrfam": "IPv4", 00:23:15.426 "traddr": "10.0.0.1", 00:23:15.426 "trsvcid": "34126" 00:23:15.426 }, 00:23:15.426 "auth": { 00:23:15.426 "state": "completed", 00:23:15.426 
"digest": "sha256", 00:23:15.426 "dhgroup": "ffdhe8192" 00:23:15.426 } 00:23:15.426 } 00:23:15.426 ]' 00:23:15.426 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.687 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:15.687 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.687 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:15.687 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.687 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.687 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.687 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.948 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:15.948 17:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:16.518 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:23:16.780 17:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.780 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.780 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.041 { 00:23:17.041 "cntlid": 49, 00:23:17.041 "qid": 0, 00:23:17.041 "state": "enabled", 00:23:17.041 "thread": "nvmf_tgt_poll_group_000", 00:23:17.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:17.041 "listen_address": { 00:23:17.041 "trtype": "TCP", 00:23:17.041 "adrfam": "IPv4", 
00:23:17.041 "traddr": "10.0.0.2", 00:23:17.041 "trsvcid": "4420" 00:23:17.041 }, 00:23:17.041 "peer_address": { 00:23:17.041 "trtype": "TCP", 00:23:17.041 "adrfam": "IPv4", 00:23:17.041 "traddr": "10.0.0.1", 00:23:17.041 "trsvcid": "34144" 00:23:17.041 }, 00:23:17.041 "auth": { 00:23:17.041 "state": "completed", 00:23:17.041 "digest": "sha384", 00:23:17.041 "dhgroup": "null" 00:23:17.041 } 00:23:17.041 } 00:23:17.041 ]' 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:17.041 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.301 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:17.301 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.301 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.301 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.302 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.563 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:17.563 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:18.131 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.131 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.131 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.131 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.131 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.131 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.131 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:18.131 17:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.391 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.391 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.651 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.651 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.651 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.651 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.651 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.651 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.651 { 00:23:18.651 "cntlid": 51, 00:23:18.651 "qid": 0, 00:23:18.651 "state": "enabled", 
00:23:18.651 "thread": "nvmf_tgt_poll_group_000", 00:23:18.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:18.651 "listen_address": { 00:23:18.651 "trtype": "TCP", 00:23:18.651 "adrfam": "IPv4", 00:23:18.651 "traddr": "10.0.0.2", 00:23:18.651 "trsvcid": "4420" 00:23:18.651 }, 00:23:18.651 "peer_address": { 00:23:18.651 "trtype": "TCP", 00:23:18.651 "adrfam": "IPv4", 00:23:18.651 "traddr": "10.0.0.1", 00:23:18.651 "trsvcid": "34178" 00:23:18.651 }, 00:23:18.651 "auth": { 00:23:18.651 "state": "completed", 00:23:18.651 "digest": "sha384", 00:23:18.651 "dhgroup": "null" 00:23:18.651 } 00:23:18.651 } 00:23:18.652 ]' 00:23:18.652 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.652 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:18.652 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.912 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:18.912 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.912 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.912 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.912 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.912 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:18.912 17:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.854 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.114 00:23:20.114 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.114 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.114 17:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.375 17:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.375 { 00:23:20.375 "cntlid": 53, 00:23:20.375 "qid": 0, 00:23:20.375 "state": "enabled", 00:23:20.375 "thread": "nvmf_tgt_poll_group_000", 00:23:20.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:20.375 "listen_address": { 00:23:20.375 "trtype": "TCP", 00:23:20.375 "adrfam": "IPv4", 00:23:20.375 "traddr": "10.0.0.2", 00:23:20.375 "trsvcid": "4420" 00:23:20.375 }, 00:23:20.375 "peer_address": { 00:23:20.375 "trtype": "TCP", 00:23:20.375 "adrfam": "IPv4", 00:23:20.375 "traddr": "10.0.0.1", 00:23:20.375 "trsvcid": "34202" 00:23:20.375 }, 00:23:20.375 "auth": { 00:23:20.375 "state": "completed", 00:23:20.375 "digest": "sha384", 00:23:20.375 "dhgroup": "null" 00:23:20.375 } 00:23:20.375 } 00:23:20.375 ]' 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.375 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.635 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:20.635 17:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:21.203 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.203 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:21.203 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.203 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.203 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.203 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:23:21.203 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:21.203 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.463 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.723 00:23:21.723 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.723 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.723 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:21.984 { 00:23:21.984 "cntlid": 55, 00:23:21.984 "qid": 0, 00:23:21.984 "state": "enabled", 00:23:21.984 "thread": "nvmf_tgt_poll_group_000", 00:23:21.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:21.984 "listen_address": { 00:23:21.984 "trtype": "TCP", 00:23:21.984 "adrfam": "IPv4", 00:23:21.984 "traddr": "10.0.0.2", 00:23:21.984 "trsvcid": "4420" 00:23:21.984 }, 00:23:21.984 "peer_address": { 00:23:21.984 "trtype": "TCP", 00:23:21.984 "adrfam": "IPv4", 00:23:21.984 "traddr": "10.0.0.1", 00:23:21.984 "trsvcid": "34232" 00:23:21.984 }, 00:23:21.984 "auth": { 00:23:21.984 "state": "completed", 00:23:21.984 "digest": "sha384", 00:23:21.984 "dhgroup": "null" 00:23:21.984 } 00:23:21.984 } 00:23:21.984 ]' 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.984 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.245 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:22.245 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:22.816 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.816 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.816 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.816 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.816 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.816 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.816 17:50:22 
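With the null-dhgroup column finished (the DHHC-1:03 key above is the last index), the for dhgroup trace shows the sweep advancing to ffdhe2048. Between combinations the script tears everything down so each pass starts from a clean state; a sketch of that teardown, with $hostnqn again standing in for the uuid NQN:

    hostrpc bdev_nvme_detach_controller nvme0          # drop the SPDK host session
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # drop the kernel session
    rpc_cmd nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 "$hostnqn"          # revoke the host on the target
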
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.816 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:22.816 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.076 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.336 00:23:23.336 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:23.336 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:23.336 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.595 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.595 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.595 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:23.595 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.595 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.595 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:23.595 { 00:23:23.595 "cntlid": 57, 00:23:23.595 "qid": 0, 00:23:23.595 "state": "enabled", 00:23:23.595 "thread": "nvmf_tgt_poll_group_000", 00:23:23.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:23.596 "listen_address": { 00:23:23.596 "trtype": "TCP", 00:23:23.596 "adrfam": "IPv4", 00:23:23.596 "traddr": "10.0.0.2", 00:23:23.596 "trsvcid": "4420" 00:23:23.596 }, 00:23:23.596 "peer_address": { 00:23:23.596 "trtype": "TCP", 00:23:23.596 "adrfam": "IPv4", 00:23:23.596 "traddr": "10.0.0.1", 00:23:23.596 "trsvcid": "34272" 00:23:23.596 }, 00:23:23.596 "auth": { 00:23:23.596 "state": "completed", 00:23:23.596 "digest": "sha384", 00:23:23.596 "dhgroup": "ffdhe2048" 00:23:23.596 } 00:23:23.596 } 00:23:23.596 ]' 00:23:23.596 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:23.596 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:23.596 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:23.596 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:23.596 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:23.596 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.596 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.596 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.855 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:23.855 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:24.425 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.686 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.946 00:23:24.946 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.946 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.946 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.210 { 00:23:25.210 "cntlid": 59, 00:23:25.210 "qid": 0, 00:23:25.210 "state": "enabled", 00:23:25.210 "thread": "nvmf_tgt_poll_group_000", 00:23:25.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:25.210 "listen_address": { 00:23:25.210 "trtype": "TCP", 00:23:25.210 "adrfam": "IPv4", 00:23:25.210 "traddr": "10.0.0.2", 00:23:25.210 "trsvcid": "4420" 00:23:25.210 }, 00:23:25.210 "peer_address": { 00:23:25.210 "trtype": "TCP", 00:23:25.210 "adrfam": "IPv4", 00:23:25.210 "traddr": "10.0.0.1", 00:23:25.210 "trsvcid": "34306" 00:23:25.210 }, 00:23:25.210 "auth": { 00:23:25.210 "state": "completed", 00:23:25.210 "digest": "sha384", 00:23:25.210 "dhgroup": "ffdhe2048" 00:23:25.210 } 00:23:25.210 } 00:23:25.210 ]' 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:25.210 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.210 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:25.210 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.210 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.210 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.210 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.470 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:25.470 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:26.039 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.039 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.039 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.039 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.039 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.039 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:26.039 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.039 17:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.299 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.559 00:23:26.559 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:26.559 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:26.559 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.820 { 00:23:26.820 "cntlid": 61, 00:23:26.820 "qid": 0, 00:23:26.820 "state": "enabled", 00:23:26.820 "thread": "nvmf_tgt_poll_group_000", 00:23:26.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:26.820 "listen_address": { 00:23:26.820 "trtype": "TCP", 00:23:26.820 "adrfam": "IPv4", 00:23:26.820 "traddr": "10.0.0.2", 00:23:26.820 "trsvcid": "4420" 00:23:26.820 }, 00:23:26.820 "peer_address": { 00:23:26.820 "trtype": "TCP", 00:23:26.820 "adrfam": "IPv4", 00:23:26.820 "traddr": "10.0.0.1", 00:23:26.820 "trsvcid": "34338" 00:23:26.820 }, 00:23:26.820 "auth": { 00:23:26.820 "state": "completed", 00:23:26.820 "digest": "sha384", 00:23:26.820 "dhgroup": "ffdhe2048" 00:23:26.820 } 00:23:26.820 } 00:23:26.820 ]' 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.820 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.081 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:27.081 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:27.651 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.651 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.651 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.651 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:27.911 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:28.172 00:23:28.172 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:28.172 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:23:28.172 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:28.465 { 00:23:28.465 "cntlid": 63, 00:23:28.465 "qid": 0, 00:23:28.465 "state": "enabled", 00:23:28.465 "thread": "nvmf_tgt_poll_group_000", 00:23:28.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:28.465 "listen_address": { 00:23:28.465 "trtype": "TCP", 00:23:28.465 "adrfam": "IPv4", 00:23:28.465 "traddr": "10.0.0.2", 00:23:28.465 "trsvcid": "4420" 00:23:28.465 }, 00:23:28.465 "peer_address": { 00:23:28.465 "trtype": "TCP", 00:23:28.465 "adrfam": "IPv4", 00:23:28.465 "traddr": "10.0.0.1", 00:23:28.465 "trsvcid": "34362" 00:23:28.465 }, 00:23:28.465 "auth": { 00:23:28.465 "state": "completed", 00:23:28.465 "digest": "sha384", 00:23:28.465 "dhgroup": "ffdhe2048" 00:23:28.465 } 00:23:28.465 } 00:23:28.465 ]' 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:28.465 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:28.466 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:28.466 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:28.466 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.466 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.466 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.768 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:28.768 17:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:23:29.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.341 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.601 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.862 
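The iterations above all run the same connect_authenticate cycle from target/auth.sh: pin the host-side initiator to a single digest/dhgroup pair, register the host NQN on the subsystem with a key pair, attach a controller through the host RPC socket (which forces the DH-HMAC-CHAP exchange to run), check the negotiated parameters, then tear down. A minimal sketch of one such iteration, assuming an SPDK checkout at $SPDK and the uuid-based host NQN from this run in $HOSTNQN:

# host-side initiator options: allow exactly one digest and one DH group
$SPDK/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# target side: register the host with a bidirectional key pair (key1/ckey1)
$SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# attaching the controller triggers authentication with the keys above
$SPDK/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# tear down before the next digest/dhgroup/key combination
$SPDK/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0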
00:23:29.862 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.862 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.862 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.862 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.862 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.862 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.862 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:30.123 { 00:23:30.123 "cntlid": 65, 00:23:30.123 "qid": 0, 00:23:30.123 "state": "enabled", 00:23:30.123 "thread": "nvmf_tgt_poll_group_000", 00:23:30.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:30.123 "listen_address": { 00:23:30.123 "trtype": "TCP", 00:23:30.123 "adrfam": "IPv4", 00:23:30.123 "traddr": "10.0.0.2", 00:23:30.123 "trsvcid": "4420" 00:23:30.123 }, 00:23:30.123 "peer_address": { 00:23:30.123 "trtype": "TCP", 00:23:30.123 "adrfam": "IPv4", 00:23:30.123 "traddr": "10.0.0.1", 00:23:30.123 "trsvcid": "34396" 00:23:30.123 }, 00:23:30.123 "auth": { 00:23:30.123 "state": "completed", 00:23:30.123 "digest": "sha384", 00:23:30.123 "dhgroup": "ffdhe3072" 00:23:30.123 } 00:23:30.123 } 00:23:30.123 ]' 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.123 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.384 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:30.384 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:30.953 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.953 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.953 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.953 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.953 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.953 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.953 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:30.953 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.212 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.472 00:23:31.472 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.472 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.472 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.472 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.472 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.472 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.472 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.733 { 00:23:31.733 "cntlid": 67, 00:23:31.733 "qid": 0, 00:23:31.733 "state": "enabled", 00:23:31.733 "thread": "nvmf_tgt_poll_group_000", 00:23:31.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:31.733 "listen_address": { 00:23:31.733 "trtype": "TCP", 00:23:31.733 "adrfam": "IPv4", 00:23:31.733 "traddr": "10.0.0.2", 00:23:31.733 "trsvcid": "4420" 00:23:31.733 }, 00:23:31.733 "peer_address": { 00:23:31.733 "trtype": "TCP", 00:23:31.733 "adrfam": "IPv4", 00:23:31.733 "traddr": "10.0.0.1", 00:23:31.733 "trsvcid": "34428" 00:23:31.733 }, 00:23:31.733 "auth": { 00:23:31.733 "state": "completed", 00:23:31.733 "digest": "sha384", 00:23:31.733 "dhgroup": "ffdhe3072" 00:23:31.733 } 00:23:31.733 } 00:23:31.733 ]' 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.733 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.993 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret 
DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:31.993 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:32.560 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.560 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.560 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.560 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.560 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.560 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:32.560 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.560 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.818 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.078 00:23:33.078 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:33.078 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:33.078 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:33.337 { 00:23:33.337 "cntlid": 69, 00:23:33.337 "qid": 0, 00:23:33.337 "state": "enabled", 00:23:33.337 "thread": "nvmf_tgt_poll_group_000", 00:23:33.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:33.337 "listen_address": { 00:23:33.337 "trtype": "TCP", 00:23:33.337 "adrfam": "IPv4", 00:23:33.337 "traddr": "10.0.0.2", 00:23:33.337 "trsvcid": "4420" 00:23:33.337 }, 00:23:33.337 "peer_address": { 00:23:33.337 "trtype": "TCP", 00:23:33.337 "adrfam": "IPv4", 00:23:33.337 "traddr": "10.0.0.1", 00:23:33.337 "trsvcid": "34456" 00:23:33.337 }, 00:23:33.337 "auth": { 00:23:33.337 "state": "completed", 00:23:33.337 "digest": "sha384", 00:23:33.337 "dhgroup": "ffdhe3072" 00:23:33.337 } 00:23:33.337 } 00:23:33.337 ]' 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.337 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:23:33.597 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:33.597 17:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:34.167 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.167 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.167 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.167 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.167 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.167 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:34.167 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:34.167 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
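The key3 passes above register the host with --dhchap-key key3 and no --dhchap-ctrlr-key, while key0 through key2 pass both flags. That comes from the array expansion logged at target/auth.sh@68: ${ckeys[$3]:+...} expands to nothing when no controller key is configured for that index, so the flag pair simply disappears from the command line. A small standalone illustration of the idiom (the array contents here are illustrative; only the expansion pattern is taken from the script):

#!/usr/bin/env bash
# key3 is configured without a controller (bidirectional) key
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")

for keyid in 0 1 2 3; do
    # ${var:+word} expands to word only when var is set and non-empty,
    # so ckey becomes either a two-element array or an empty one
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # with keyid=3, "${ckey[@]}" contributes zero arguments
    echo nvmf_subsystem_add_host ... --dhchap-key "key$keyid" "${ckey[@]}"
done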
00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:34.428 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:34.688 00:23:34.688 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.688 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.688 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.947 { 00:23:34.947 "cntlid": 71, 00:23:34.947 "qid": 0, 00:23:34.947 "state": "enabled", 00:23:34.947 "thread": "nvmf_tgt_poll_group_000", 00:23:34.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:34.947 "listen_address": { 00:23:34.947 "trtype": "TCP", 00:23:34.947 "adrfam": "IPv4", 00:23:34.947 "traddr": "10.0.0.2", 00:23:34.947 "trsvcid": "4420" 00:23:34.947 }, 00:23:34.947 "peer_address": { 00:23:34.947 "trtype": "TCP", 00:23:34.947 "adrfam": "IPv4", 00:23:34.947 "traddr": "10.0.0.1", 00:23:34.947 "trsvcid": "34490" 00:23:34.947 }, 00:23:34.947 "auth": { 00:23:34.947 "state": "completed", 00:23:34.947 "digest": "sha384", 00:23:34.947 "dhgroup": "ffdhe3072" 00:23:34.947 } 00:23:34.947 } 00:23:34.947 ]' 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:34.947 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.948 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.948 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.948 17:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.208 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:35.208 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:35.779 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.039 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.039 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.039 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.039 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.039 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.039 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:36.039 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:36.039 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
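After every attach, the script reads the negotiated parameters back from the target rather than trusting the attach to have used them: nvmf_subsystem_get_qpairs reports an auth object per qpair, and the jq checks at target/auth.sh@75-77 assert the digest, the DH group, and the final state. A sketch of the same verification, assuming the JSON shape shown in the log and an SPDK checkout at $SPDK:

# pull the qpairs for the subsystem and assert on qpair 0's auth block
qpairs=$($SPDK/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
# "completed" means the DH-HMAC-CHAP exchange finished successfully
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]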
00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.299 17:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.299 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.562 { 00:23:36.562 "cntlid": 73, 00:23:36.562 "qid": 0, 00:23:36.562 "state": "enabled", 00:23:36.562 "thread": "nvmf_tgt_poll_group_000", 00:23:36.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:36.562 "listen_address": { 00:23:36.562 "trtype": "TCP", 00:23:36.562 "adrfam": "IPv4", 00:23:36.562 "traddr": "10.0.0.2", 00:23:36.562 "trsvcid": "4420" 00:23:36.562 }, 00:23:36.562 "peer_address": { 00:23:36.562 "trtype": "TCP", 00:23:36.562 "adrfam": "IPv4", 00:23:36.562 "traddr": "10.0.0.1", 00:23:36.562 "trsvcid": "60520" 00:23:36.562 }, 00:23:36.562 "auth": { 00:23:36.562 "state": "completed", 00:23:36.562 "digest": "sha384", 00:23:36.562 "dhgroup": "ffdhe4096" 00:23:36.562 } 00:23:36.562 } 00:23:36.562 ]' 00:23:36.562 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.826 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.826 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.826 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:36.826 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.826 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.826 
17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.826 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.087 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:37.087 17:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:37.659 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.659 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.659 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.659 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.659 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.659 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.659 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.659 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.920 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.180 00:23:38.180 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:38.180 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:38.180 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.180 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.180 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.180 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.180 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.180 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.180 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:38.180 { 00:23:38.180 "cntlid": 75, 00:23:38.180 "qid": 0, 00:23:38.180 "state": "enabled", 00:23:38.180 "thread": "nvmf_tgt_poll_group_000", 00:23:38.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:38.180 "listen_address": { 00:23:38.180 "trtype": "TCP", 00:23:38.180 "adrfam": "IPv4", 00:23:38.180 "traddr": "10.0.0.2", 00:23:38.180 "trsvcid": "4420" 00:23:38.180 }, 00:23:38.180 "peer_address": { 00:23:38.180 "trtype": "TCP", 00:23:38.180 "adrfam": "IPv4", 00:23:38.180 "traddr": "10.0.0.1", 00:23:38.180 "trsvcid": "60554" 00:23:38.180 }, 00:23:38.180 "auth": { 00:23:38.180 "state": "completed", 00:23:38.180 "digest": "sha384", 00:23:38.180 "dhgroup": "ffdhe4096" 00:23:38.180 } 00:23:38.180 } 00:23:38.180 ]' 00:23:38.180 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.440 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:38.440 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.440 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:23:38.440 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.440 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.441 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.441 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.700 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:38.701 17:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:39.270 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.270 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.270 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.270 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.270 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.270 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:39.270 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.270 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.529 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.789 00:23:39.789 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:39.789 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:39.789 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.049 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.049 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.049 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.049 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.049 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.049 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:40.050 { 00:23:40.050 "cntlid": 77, 00:23:40.050 "qid": 0, 00:23:40.050 "state": "enabled", 00:23:40.050 "thread": "nvmf_tgt_poll_group_000", 00:23:40.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:40.050 "listen_address": { 00:23:40.050 "trtype": "TCP", 00:23:40.050 "adrfam": "IPv4", 00:23:40.050 "traddr": "10.0.0.2", 00:23:40.050 "trsvcid": "4420" 00:23:40.050 }, 00:23:40.050 "peer_address": { 00:23:40.050 "trtype": "TCP", 00:23:40.050 "adrfam": "IPv4", 00:23:40.050 "traddr": "10.0.0.1", 00:23:40.050 "trsvcid": "60576" 00:23:40.050 }, 00:23:40.050 "auth": { 00:23:40.050 "state": "completed", 00:23:40.050 "digest": "sha384", 00:23:40.050 "dhgroup": "ffdhe4096" 00:23:40.050 } 00:23:40.050 } 00:23:40.050 ]' 00:23:40.050 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:40.050 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:40.050 17:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:40.050 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:40.050 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:40.050 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.050 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.050 17:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.311 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:40.311 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:40.882 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.882 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.882 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.882 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.882 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.882 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:40.882 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:40.882 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:41.142 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:23:41.142 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:41.142 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:41.142 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:41.143 17:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:41.403 00:23:41.403 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:41.403 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:41.403 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:41.664 { 00:23:41.664 "cntlid": 79, 00:23:41.664 "qid": 0, 00:23:41.664 "state": "enabled", 00:23:41.664 "thread": "nvmf_tgt_poll_group_000", 00:23:41.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:41.664 "listen_address": { 00:23:41.664 "trtype": "TCP", 00:23:41.664 "adrfam": "IPv4", 00:23:41.664 "traddr": "10.0.0.2", 00:23:41.664 "trsvcid": "4420" 00:23:41.664 }, 00:23:41.664 "peer_address": { 00:23:41.664 "trtype": "TCP", 00:23:41.664 "adrfam": "IPv4", 00:23:41.664 "traddr": "10.0.0.1", 00:23:41.664 "trsvcid": "60600" 00:23:41.664 }, 00:23:41.664 "auth": { 00:23:41.664 "state": "completed", 00:23:41.664 "digest": "sha384", 00:23:41.664 "dhgroup": "ffdhe4096" 00:23:41.664 } 00:23:41.664 } 00:23:41.664 ]' 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:41.664 17:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.664 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.926 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:41.926 17:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.496 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:42.756 17:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.756 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.018 00:23:43.018 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:43.018 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:43.018 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:43.278 { 00:23:43.278 "cntlid": 81, 00:23:43.278 "qid": 0, 00:23:43.278 "state": "enabled", 00:23:43.278 "thread": "nvmf_tgt_poll_group_000", 00:23:43.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:43.278 "listen_address": { 00:23:43.278 "trtype": "TCP", 00:23:43.278 "adrfam": "IPv4", 00:23:43.278 "traddr": "10.0.0.2", 00:23:43.278 "trsvcid": "4420" 00:23:43.278 }, 00:23:43.278 "peer_address": { 00:23:43.278 "trtype": "TCP", 00:23:43.278 "adrfam": "IPv4", 00:23:43.278 "traddr": "10.0.0.1", 00:23:43.278 "trsvcid": "60610" 00:23:43.278 }, 00:23:43.278 "auth": { 00:23:43.278 "state": "completed", 00:23:43.278 "digest": 
"sha384", 00:23:43.278 "dhgroup": "ffdhe6144" 00:23:43.278 } 00:23:43.278 } 00:23:43.278 ]' 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:43.278 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:43.538 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.538 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.538 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.538 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:43.539 17:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.480 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.740 00:23:44.740 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:44.740 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:44.740 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:45.001 { 00:23:45.001 "cntlid": 83, 00:23:45.001 "qid": 0, 00:23:45.001 "state": "enabled", 00:23:45.001 "thread": "nvmf_tgt_poll_group_000", 00:23:45.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:45.001 "listen_address": { 00:23:45.001 "trtype": "TCP", 00:23:45.001 "adrfam": "IPv4", 00:23:45.001 "traddr": "10.0.0.2", 00:23:45.001 
"trsvcid": "4420" 00:23:45.001 }, 00:23:45.001 "peer_address": { 00:23:45.001 "trtype": "TCP", 00:23:45.001 "adrfam": "IPv4", 00:23:45.001 "traddr": "10.0.0.1", 00:23:45.001 "trsvcid": "60622" 00:23:45.001 }, 00:23:45.001 "auth": { 00:23:45.001 "state": "completed", 00:23:45.001 "digest": "sha384", 00:23:45.001 "dhgroup": "ffdhe6144" 00:23:45.001 } 00:23:45.001 } 00:23:45.001 ]' 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.001 17:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.261 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:45.261 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:45.832 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.832 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.832 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.832 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.832 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.832 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:45.832 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:45.832 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:46.094 
17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.094 17:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.356 00:23:46.356 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:46.356 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:46.356 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.616 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.616 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.616 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.616 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.616 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.616 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:46.616 { 00:23:46.616 "cntlid": 85, 00:23:46.616 "qid": 0, 00:23:46.616 "state": "enabled", 00:23:46.616 "thread": "nvmf_tgt_poll_group_000", 00:23:46.616 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:46.616 "listen_address": { 00:23:46.616 "trtype": "TCP", 00:23:46.616 "adrfam": "IPv4", 00:23:46.616 "traddr": "10.0.0.2", 00:23:46.616 "trsvcid": "4420" 00:23:46.616 }, 00:23:46.616 "peer_address": { 00:23:46.616 "trtype": "TCP", 00:23:46.616 "adrfam": "IPv4", 00:23:46.616 "traddr": "10.0.0.1", 00:23:46.616 "trsvcid": "54420" 00:23:46.616 }, 00:23:46.616 "auth": { 00:23:46.616 "state": "completed", 00:23:46.616 "digest": "sha384", 00:23:46.616 "dhgroup": "ffdhe6144" 00:23:46.616 } 00:23:46.616 } 00:23:46.616 ]' 00:23:46.616 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:46.616 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:46.617 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:46.876 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:46.877 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:46.877 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.877 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.877 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.877 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:46.877 17:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.818 17:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:47.818 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:48.078 00:23:48.078 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:48.078 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:48.078 17:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:48.339 { 00:23:48.339 "cntlid": 87, 
00:23:48.339 "qid": 0, 00:23:48.339 "state": "enabled", 00:23:48.339 "thread": "nvmf_tgt_poll_group_000", 00:23:48.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:48.339 "listen_address": { 00:23:48.339 "trtype": "TCP", 00:23:48.339 "adrfam": "IPv4", 00:23:48.339 "traddr": "10.0.0.2", 00:23:48.339 "trsvcid": "4420" 00:23:48.339 }, 00:23:48.339 "peer_address": { 00:23:48.339 "trtype": "TCP", 00:23:48.339 "adrfam": "IPv4", 00:23:48.339 "traddr": "10.0.0.1", 00:23:48.339 "trsvcid": "54454" 00:23:48.339 }, 00:23:48.339 "auth": { 00:23:48.339 "state": "completed", 00:23:48.339 "digest": "sha384", 00:23:48.339 "dhgroup": "ffdhe6144" 00:23:48.339 } 00:23:48.339 } 00:23:48.339 ]' 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:48.339 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:48.600 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.600 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.600 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.600 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:48.600 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.540 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.109 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.109 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:50.109 { 00:23:50.109 "cntlid": 89, 00:23:50.109 "qid": 0, 00:23:50.109 "state": "enabled", 00:23:50.109 "thread": "nvmf_tgt_poll_group_000", 00:23:50.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:50.109 "listen_address": { 00:23:50.109 "trtype": "TCP", 00:23:50.109 "adrfam": "IPv4", 00:23:50.110 "traddr": "10.0.0.2", 00:23:50.110 "trsvcid": "4420" 00:23:50.110 }, 00:23:50.110 "peer_address": { 00:23:50.110 "trtype": "TCP", 00:23:50.110 "adrfam": "IPv4", 00:23:50.110 "traddr": "10.0.0.1", 00:23:50.110 "trsvcid": "54468" 00:23:50.110 }, 00:23:50.110 "auth": { 00:23:50.110 "state": "completed", 00:23:50.110 "digest": "sha384", 00:23:50.110 "dhgroup": "ffdhe8192" 00:23:50.110 } 00:23:50.110 } 00:23:50.110 ]' 00:23:50.110 17:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:50.369 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:50.369 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:50.369 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:50.369 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:50.369 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.369 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.369 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.629 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:50.629 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:51.200 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.200 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:51.200 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.200 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.200 17:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.200 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:51.200 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:51.200 17:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.459 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.720 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.981 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:51.981 { 00:23:51.981 "cntlid": 91, 00:23:51.981 "qid": 0, 00:23:51.981 "state": "enabled", 00:23:51.981 "thread": "nvmf_tgt_poll_group_000", 00:23:51.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:51.981 "listen_address": { 00:23:51.981 "trtype": "TCP", 00:23:51.981 "adrfam": "IPv4", 00:23:51.981 "traddr": "10.0.0.2", 00:23:51.981 "trsvcid": "4420" 00:23:51.981 }, 00:23:51.981 "peer_address": { 00:23:51.981 "trtype": "TCP", 00:23:51.981 "adrfam": "IPv4", 00:23:51.981 "traddr": "10.0.0.1", 00:23:51.981 "trsvcid": "54490" 00:23:51.981 }, 00:23:51.982 "auth": { 00:23:51.982 "state": "completed", 00:23:51.982 "digest": "sha384", 00:23:51.982 "dhgroup": "ffdhe8192" 00:23:51.982 } 00:23:51.982 } 00:23:51.982 ]' 00:23:51.982 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:51.982 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:51.982 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.242 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:52.242 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:52.242 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.242 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.242 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.502 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:52.502 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:53.117 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.117 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.117 17:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.117 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.117 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.117 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:53.117 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:53.117 17:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.408 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.668 00:23:53.668 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:53.668 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:53.668 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.928 17:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.928 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.928 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.928 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.928 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.928 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:53.928 { 00:23:53.928 "cntlid": 93, 00:23:53.928 "qid": 0, 00:23:53.929 "state": "enabled", 00:23:53.929 "thread": "nvmf_tgt_poll_group_000", 00:23:53.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:53.929 "listen_address": { 00:23:53.929 "trtype": "TCP", 00:23:53.929 "adrfam": "IPv4", 00:23:53.929 "traddr": "10.0.0.2", 00:23:53.929 "trsvcid": "4420" 00:23:53.929 }, 00:23:53.929 "peer_address": { 00:23:53.929 "trtype": "TCP", 00:23:53.929 "adrfam": "IPv4", 00:23:53.929 "traddr": "10.0.0.1", 00:23:53.929 "trsvcid": "54532" 00:23:53.929 }, 00:23:53.929 "auth": { 00:23:53.929 "state": "completed", 00:23:53.929 "digest": "sha384", 00:23:53.929 "dhgroup": "ffdhe8192" 00:23:53.929 } 00:23:53.929 } 00:23:53.929 ]' 00:23:53.929 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:53.929 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:53.929 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:53.929 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:53.929 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:54.189 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.189 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.189 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.189 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:54.189 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:23:54.759 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.759 17:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.759 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.759 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:55.018 17:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:55.588 00:23:55.588 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:55.588 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:55.588 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:55.847 { 00:23:55.847 "cntlid": 95, 00:23:55.847 "qid": 0, 00:23:55.847 "state": "enabled", 00:23:55.847 "thread": "nvmf_tgt_poll_group_000", 00:23:55.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:55.847 "listen_address": { 00:23:55.847 "trtype": "TCP", 00:23:55.847 "adrfam": "IPv4", 00:23:55.847 "traddr": "10.0.0.2", 00:23:55.847 "trsvcid": "4420" 00:23:55.847 }, 00:23:55.847 "peer_address": { 00:23:55.847 "trtype": "TCP", 00:23:55.847 "adrfam": "IPv4", 00:23:55.847 "traddr": "10.0.0.1", 00:23:55.847 "trsvcid": "56576" 00:23:55.847 }, 00:23:55.847 "auth": { 00:23:55.847 "state": "completed", 00:23:55.847 "digest": "sha384", 00:23:55.847 "dhgroup": "ffdhe8192" 00:23:55.847 } 00:23:55.847 } 00:23:55.847 ]' 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:55.847 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.107 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:56.107 17:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.678 17:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:56.678 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.938 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.198 00:23:57.198 
17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:57.198 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:57.198 17:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:57.458 { 00:23:57.458 "cntlid": 97, 00:23:57.458 "qid": 0, 00:23:57.458 "state": "enabled", 00:23:57.458 "thread": "nvmf_tgt_poll_group_000", 00:23:57.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:57.458 "listen_address": { 00:23:57.458 "trtype": "TCP", 00:23:57.458 "adrfam": "IPv4", 00:23:57.458 "traddr": "10.0.0.2", 00:23:57.458 "trsvcid": "4420" 00:23:57.458 }, 00:23:57.458 "peer_address": { 00:23:57.458 "trtype": "TCP", 00:23:57.458 "adrfam": "IPv4", 00:23:57.458 "traddr": "10.0.0.1", 00:23:57.458 "trsvcid": "56588" 00:23:57.458 }, 00:23:57.458 "auth": { 00:23:57.458 "state": "completed", 00:23:57.458 "digest": "sha512", 00:23:57.458 "dhgroup": "null" 00:23:57.458 } 00:23:57.458 } 00:23:57.458 ]' 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.458 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.718 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:57.718 17:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:23:58.288 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.288 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.288 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.288 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.288 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.288 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:58.288 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:58.288 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.548 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.808 00:23:58.808 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:58.808 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:58.808 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:59.069 { 00:23:59.069 "cntlid": 99, 00:23:59.069 "qid": 0, 00:23:59.069 "state": "enabled", 00:23:59.069 "thread": "nvmf_tgt_poll_group_000", 00:23:59.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:59.069 "listen_address": { 00:23:59.069 "trtype": "TCP", 00:23:59.069 "adrfam": "IPv4", 00:23:59.069 "traddr": "10.0.0.2", 00:23:59.069 "trsvcid": "4420" 00:23:59.069 }, 00:23:59.069 "peer_address": { 00:23:59.069 "trtype": "TCP", 00:23:59.069 "adrfam": "IPv4", 00:23:59.069 "traddr": "10.0.0.1", 00:23:59.069 "trsvcid": "56622" 00:23:59.069 }, 00:23:59.069 "auth": { 00:23:59.069 "state": "completed", 00:23:59.069 "digest": "sha512", 00:23:59.069 "dhgroup": "null" 00:23:59.069 } 00:23:59.069 } 00:23:59.069 ]' 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.069 17:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.329 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:59.329 17:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:23:59.899 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.899 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.899 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.899 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.899 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.899 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:59.899 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:59.899 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
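The round above is one pass of the test's connect_authenticate loop: pin the host to a single digest/dhgroup pair, register the host NQN on the subsystem with the key pair under test, then attach a controller so the DH-HMAC-CHAP exchange actually runs. A minimal sketch of that sequence, assuming a running SPDK target with subsystem nqn.2024-03.io.spdk:cnode0 listening on 10.0.0.2:4420 and a host-side RPC socket at /var/tmp/host.sock (all values copied from the log entries above; this is an illustrative distillation, not target/auth.sh itself):

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Pin the host to one digest/dhgroup combination for this round.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # Allow the host on the subsystem, naming the DHCHAP key pair under test.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach a controller; the DH-HMAC-CHAP exchange happens on this connect.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2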
00:24:00.160 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.421 00:24:00.421 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:00.421 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:00.421 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:00.683 { 00:24:00.683 "cntlid": 101, 00:24:00.683 "qid": 0, 00:24:00.683 "state": "enabled", 00:24:00.683 "thread": "nvmf_tgt_poll_group_000", 00:24:00.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:00.683 "listen_address": { 00:24:00.683 "trtype": "TCP", 00:24:00.683 "adrfam": "IPv4", 00:24:00.683 "traddr": "10.0.0.2", 00:24:00.683 "trsvcid": "4420" 00:24:00.683 }, 00:24:00.683 "peer_address": { 00:24:00.683 "trtype": "TCP", 00:24:00.683 "adrfam": "IPv4", 00:24:00.683 "traddr": "10.0.0.1", 00:24:00.683 "trsvcid": "56664" 00:24:00.683 }, 00:24:00.683 "auth": { 00:24:00.683 "state": "completed", 00:24:00.683 "digest": "sha512", 00:24:00.683 "dhgroup": "null" 00:24:00.683 } 00:24:00.683 } 00:24:00.683 ]' 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.683 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.944 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:24:00.944 17:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:24:01.516 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.516 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.516 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.516 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.516 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.516 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:01.516 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:01.516 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:01.777 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:02.037 00:24:02.037 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:02.037 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:02.037 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.298 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.298 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:02.298 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.298 17:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:02.298 { 00:24:02.298 "cntlid": 103, 00:24:02.298 "qid": 0, 00:24:02.298 "state": "enabled", 00:24:02.298 "thread": "nvmf_tgt_poll_group_000", 00:24:02.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:02.298 "listen_address": { 00:24:02.298 "trtype": "TCP", 00:24:02.298 "adrfam": "IPv4", 00:24:02.298 "traddr": "10.0.0.2", 00:24:02.298 "trsvcid": "4420" 00:24:02.298 }, 00:24:02.298 "peer_address": { 00:24:02.298 "trtype": "TCP", 00:24:02.298 "adrfam": "IPv4", 00:24:02.298 "traddr": "10.0.0.1", 00:24:02.298 "trsvcid": "56692" 00:24:02.298 }, 00:24:02.298 "auth": { 00:24:02.298 "state": "completed", 00:24:02.298 "digest": "sha512", 00:24:02.298 "dhgroup": "null" 00:24:02.298 } 00:24:02.298 } 00:24:02.298 ]' 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:02.298 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.559 17:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:02.559 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:03.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:03.131 17:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
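Between RPC-driven rounds the test also exercises the kernel-initiator leg seen above (target/auth.sh@36 and @82): nvme-cli connects with the same secrets in their transport form, disconnects, and the host is removed from the subsystem before the next digest/dhgroup combination. A hedged sketch of that leg, reusing $rpc, $hostnqn, and $subnqn from the previous sketch; the DHHC-1 values are placeholders standing in for the full secret strings printed in the log:

    # Kernel-initiator leg: connect with in-band auth secrets, then tear down.
    # 'DHHC-1:00:...' / 'DHHC-1:03:...' are placeholders; a real run passes the
    # complete secret strings logged above.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"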
00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.391 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.652 00:24:03.652 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:03.652 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:03.652 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.913 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.913 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.913 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:03.914 { 00:24:03.914 "cntlid": 105, 00:24:03.914 "qid": 0, 00:24:03.914 "state": "enabled", 00:24:03.914 "thread": "nvmf_tgt_poll_group_000", 00:24:03.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:03.914 "listen_address": { 00:24:03.914 "trtype": "TCP", 00:24:03.914 "adrfam": "IPv4", 00:24:03.914 "traddr": "10.0.0.2", 00:24:03.914 "trsvcid": "4420" 00:24:03.914 }, 00:24:03.914 "peer_address": { 00:24:03.914 "trtype": "TCP", 00:24:03.914 "adrfam": "IPv4", 00:24:03.914 "traddr": "10.0.0.1", 00:24:03.914 "trsvcid": "56718" 00:24:03.914 }, 00:24:03.914 "auth": { 00:24:03.914 "state": "completed", 00:24:03.914 "digest": "sha512", 00:24:03.914 "dhgroup": "ffdhe2048" 00:24:03.914 } 00:24:03.914 } 00:24:03.914 ]' 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.914 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.914 17:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.174 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:24:04.174 17:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:24:04.745 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.746 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.746 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.746 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.746 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.746 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:04.746 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:04.746 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.006 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.267 00:24:05.267 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:05.267 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:05.267 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:05.267 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:05.528 { 00:24:05.528 "cntlid": 107, 00:24:05.528 "qid": 0, 00:24:05.528 "state": "enabled", 00:24:05.528 "thread": "nvmf_tgt_poll_group_000", 00:24:05.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:05.528 "listen_address": { 00:24:05.528 "trtype": "TCP", 00:24:05.528 "adrfam": "IPv4", 00:24:05.528 "traddr": "10.0.0.2", 00:24:05.528 "trsvcid": "4420" 00:24:05.528 }, 00:24:05.528 "peer_address": { 00:24:05.528 "trtype": "TCP", 00:24:05.528 "adrfam": "IPv4", 00:24:05.528 "traddr": "10.0.0.1", 00:24:05.528 "trsvcid": "35020" 00:24:05.528 }, 00:24:05.528 "auth": { 00:24:05.528 "state": "completed", 00:24:05.528 "digest": "sha512", 00:24:05.528 "dhgroup": "ffdhe2048" 00:24:05.528 } 00:24:05.528 } 00:24:05.528 ]' 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:05.528 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:05.790 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:24:05.790 17:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:24:06.361 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:06.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:06.361 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:06.361 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.361 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.361 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.361 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:06.361 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.361 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
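The jq checks that follow each attach are the actual assertions of the test: exactly one host controller named nvme0 must exist, and the subsystem's single qpair must report the negotiated digest, the negotiated dhgroup, and an auth state of "completed". A sketch of that verification step, with the jq paths copied from the log output above and $rpc/$subnqn as in the first sketch:

    # Verify the controller attached and the qpair authenticated as expected.
    name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]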
00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.620 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.621 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.881 00:24:06.881 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:06.881 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:06.881 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:07.142 { 00:24:07.142 "cntlid": 109, 00:24:07.142 "qid": 0, 00:24:07.142 "state": "enabled", 00:24:07.142 "thread": "nvmf_tgt_poll_group_000", 00:24:07.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:07.142 "listen_address": { 00:24:07.142 "trtype": "TCP", 00:24:07.142 "adrfam": "IPv4", 00:24:07.142 "traddr": "10.0.0.2", 00:24:07.142 "trsvcid": "4420" 00:24:07.142 }, 00:24:07.142 "peer_address": { 00:24:07.142 "trtype": "TCP", 00:24:07.142 "adrfam": "IPv4", 00:24:07.142 "traddr": "10.0.0.1", 00:24:07.142 "trsvcid": "35042" 00:24:07.142 }, 00:24:07.142 "auth": { 00:24:07.142 "state": "completed", 00:24:07.142 "digest": "sha512", 00:24:07.142 "dhgroup": "ffdhe2048" 00:24:07.142 } 00:24:07.142 } 00:24:07.142 ]' 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:07.142 17:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:07.142 17:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.402 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:24:07.402 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:24:07.974 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.974 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:07.974 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.974 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.974 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.974 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:07.974 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:07.974 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:08.235 17:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.235 17:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:08.496 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:08.496 { 00:24:08.496 "cntlid": 111, 00:24:08.496 "qid": 0, 00:24:08.496 "state": "enabled", 00:24:08.496 "thread": "nvmf_tgt_poll_group_000", 00:24:08.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:08.496 "listen_address": { 00:24:08.496 "trtype": "TCP", 00:24:08.496 "adrfam": "IPv4", 00:24:08.496 "traddr": "10.0.0.2", 00:24:08.496 "trsvcid": "4420" 00:24:08.496 }, 00:24:08.496 "peer_address": { 00:24:08.496 "trtype": "TCP", 00:24:08.496 "adrfam": "IPv4", 00:24:08.496 "traddr": "10.0.0.1", 00:24:08.496 "trsvcid": "35066" 00:24:08.496 }, 00:24:08.496 "auth": { 00:24:08.496 "state": "completed", 00:24:08.496 "digest": "sha512", 00:24:08.496 "dhgroup": "ffdhe2048" 00:24:08.496 } 00:24:08.496 } 00:24:08.496 ]' 00:24:08.496 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:08.757 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:08.757 
17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:08.757 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:08.757 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:08.757 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.757 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.757 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.018 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:09.018 17:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:09.589 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:09.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.589 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.589 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.589 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.589 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.589 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.590 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:09.590 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:09.590 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.851 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.112 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.112 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:10.112 { 00:24:10.112 "cntlid": 113, 00:24:10.112 "qid": 0, 00:24:10.112 "state": "enabled", 00:24:10.112 "thread": "nvmf_tgt_poll_group_000", 00:24:10.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:10.112 "listen_address": { 00:24:10.112 "trtype": "TCP", 00:24:10.112 "adrfam": "IPv4", 00:24:10.112 "traddr": "10.0.0.2", 00:24:10.112 "trsvcid": "4420" 00:24:10.112 }, 00:24:10.112 "peer_address": { 00:24:10.112 "trtype": "TCP", 00:24:10.112 "adrfam": "IPv4", 00:24:10.112 "traddr": "10.0.0.1", 00:24:10.112 "trsvcid": "35100" 00:24:10.112 }, 00:24:10.112 "auth": { 00:24:10.112 "state": "completed", 00:24:10.112 "digest": "sha512", 00:24:10.112 "dhgroup": "ffdhe3072" 00:24:10.112 } 00:24:10.112 } 00:24:10.112 ]' 00:24:10.112 17:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:10.373 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:10.373 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:10.373 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:10.373 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:10.373 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.373 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.373 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.633 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:24:10.633 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:24:11.204 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:11.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:11.204 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.204 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.204 17:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.204 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.204 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:11.204 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:11.204 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.464 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.725 00:24:11.725 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:11.725 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:11.725 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.725 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.725 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.725 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.725 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.985 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.985 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:11.985 { 00:24:11.985 "cntlid": 115, 00:24:11.985 "qid": 0, 00:24:11.985 "state": "enabled", 00:24:11.985 "thread": "nvmf_tgt_poll_group_000", 00:24:11.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:11.985 "listen_address": { 00:24:11.985 "trtype": "TCP", 00:24:11.985 "adrfam": "IPv4", 00:24:11.985 "traddr": "10.0.0.2", 00:24:11.985 "trsvcid": "4420" 00:24:11.985 }, 00:24:11.985 "peer_address": { 00:24:11.985 "trtype": "TCP", 00:24:11.985 "adrfam": "IPv4", 
00:24:11.985 "traddr": "10.0.0.1", 00:24:11.985 "trsvcid": "35138" 00:24:11.985 }, 00:24:11.985 "auth": { 00:24:11.985 "state": "completed", 00:24:11.985 "digest": "sha512", 00:24:11.985 "dhgroup": "ffdhe3072" 00:24:11.985 } 00:24:11.985 } 00:24:11.985 ]' 00:24:11.985 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:11.985 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:11.985 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:11.985 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:11.985 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:11.986 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:11.986 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.986 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.246 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:24:12.246 17:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:24:12.816 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:12.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:12.816 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.816 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.816 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.816 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.816 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:12.816 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:12.816 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.076 17:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.336 00:24:13.336 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:13.336 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:13.336 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:13.597 { 00:24:13.597 "cntlid": 117, 00:24:13.597 "qid": 0, 00:24:13.597 "state": "enabled", 00:24:13.597 "thread": "nvmf_tgt_poll_group_000", 00:24:13.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:13.597 "listen_address": { 00:24:13.597 "trtype": "TCP", 
00:24:13.597 "adrfam": "IPv4", 00:24:13.597 "traddr": "10.0.0.2", 00:24:13.597 "trsvcid": "4420" 00:24:13.597 }, 00:24:13.597 "peer_address": { 00:24:13.597 "trtype": "TCP", 00:24:13.597 "adrfam": "IPv4", 00:24:13.597 "traddr": "10.0.0.1", 00:24:13.597 "trsvcid": "35164" 00:24:13.597 }, 00:24:13.597 "auth": { 00:24:13.597 "state": "completed", 00:24:13.597 "digest": "sha512", 00:24:13.597 "dhgroup": "ffdhe3072" 00:24:13.597 } 00:24:13.597 } 00:24:13.597 ]' 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.597 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:13.859 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:24:13.859 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:24:14.430 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.430 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.430 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.430 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.430 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.430 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:14.430 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:14.430 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:14.690 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:14.952 00:24:14.952 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:14.952 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:14.952 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:15.213 { 00:24:15.213 "cntlid": 119, 00:24:15.213 "qid": 0, 00:24:15.213 "state": "enabled", 00:24:15.213 "thread": "nvmf_tgt_poll_group_000", 00:24:15.213 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:15.213 "listen_address": { 00:24:15.213 "trtype": "TCP", 00:24:15.213 "adrfam": "IPv4", 00:24:15.213 "traddr": "10.0.0.2", 00:24:15.213 "trsvcid": "4420" 00:24:15.213 }, 00:24:15.213 "peer_address": { 00:24:15.213 "trtype": "TCP", 00:24:15.213 "adrfam": "IPv4", 00:24:15.213 "traddr": "10.0.0.1", 00:24:15.213 "trsvcid": "35194" 00:24:15.213 }, 00:24:15.213 "auth": { 00:24:15.213 "state": "completed", 00:24:15.213 "digest": "sha512", 00:24:15.213 "dhgroup": "ffdhe3072" 00:24:15.213 } 00:24:15.213 } 00:24:15.213 ]' 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:15.213 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:15.213 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.213 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:15.213 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:15.474 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:15.474 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:16.199 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.199 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:16.199 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.199 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.199 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.199 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.199 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:16.199 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.199 17:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.199 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.200 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.200 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.460 00:24:16.460 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:16.460 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:16.460 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.722 17:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:16.722 { 00:24:16.722 "cntlid": 121, 00:24:16.722 "qid": 0, 00:24:16.722 "state": "enabled", 00:24:16.722 "thread": "nvmf_tgt_poll_group_000", 00:24:16.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:16.722 "listen_address": { 00:24:16.722 "trtype": "TCP", 00:24:16.722 "adrfam": "IPv4", 00:24:16.722 "traddr": "10.0.0.2", 00:24:16.722 "trsvcid": "4420" 00:24:16.722 }, 00:24:16.722 "peer_address": { 00:24:16.722 "trtype": "TCP", 00:24:16.722 "adrfam": "IPv4", 00:24:16.722 "traddr": "10.0.0.1", 00:24:16.722 "trsvcid": "37658" 00:24:16.722 }, 00:24:16.722 "auth": { 00:24:16.722 "state": "completed", 00:24:16.722 "digest": "sha512", 00:24:16.722 "dhgroup": "ffdhe4096" 00:24:16.722 } 00:24:16.722 } 00:24:16.722 ]' 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:16.722 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:16.984 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.984 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.984 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.984 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:24:16.984 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:24:17.555 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:17.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.817 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.078 00:24:18.078 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:18.078 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:18.078 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:18.338 { 00:24:18.338 "cntlid": 123, 00:24:18.338 "qid": 0, 00:24:18.338 "state": "enabled", 00:24:18.338 "thread": "nvmf_tgt_poll_group_000", 00:24:18.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:18.338 "listen_address": { 00:24:18.338 "trtype": "TCP", 00:24:18.338 "adrfam": "IPv4", 00:24:18.338 "traddr": "10.0.0.2", 00:24:18.338 "trsvcid": "4420" 00:24:18.338 }, 00:24:18.338 "peer_address": { 00:24:18.338 "trtype": "TCP", 00:24:18.338 "adrfam": "IPv4", 00:24:18.338 "traddr": "10.0.0.1", 00:24:18.338 "trsvcid": "37684" 00:24:18.338 }, 00:24:18.338 "auth": { 00:24:18.338 "state": "completed", 00:24:18.338 "digest": "sha512", 00:24:18.338 "dhgroup": "ffdhe4096" 00:24:18.338 } 00:24:18.338 } 00:24:18.338 ]' 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:18.338 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:18.599 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:18.599 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:18.599 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:18.599 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:24:18.599 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==: 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:19.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.539 17:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.539 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.800 00:24:19.800 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:19.800 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:19.801 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.062 17:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:20.062 { 00:24:20.062 "cntlid": 125, 00:24:20.062 "qid": 0, 00:24:20.062 "state": "enabled", 00:24:20.062 "thread": "nvmf_tgt_poll_group_000", 00:24:20.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:20.062 "listen_address": { 00:24:20.062 "trtype": "TCP", 00:24:20.062 "adrfam": "IPv4", 00:24:20.062 "traddr": "10.0.0.2", 00:24:20.062 "trsvcid": "4420" 00:24:20.062 }, 00:24:20.062 "peer_address": { 00:24:20.062 "trtype": "TCP", 00:24:20.062 "adrfam": "IPv4", 00:24:20.062 "traddr": "10.0.0.1", 00:24:20.062 "trsvcid": "37718" 00:24:20.062 }, 00:24:20.062 "auth": { 00:24:20.062 "state": "completed", 00:24:20.062 "digest": "sha512", 00:24:20.062 "dhgroup": "ffdhe4096" 00:24:20.062 } 00:24:20.062 } 00:24:20.062 ]' 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.062 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:20.323 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:24:20.323 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe: 00:24:20.894 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.894 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
00:24:20.894 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:20.894 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:20.894 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:21.155 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:21.155 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:21.155 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:21.156 17:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:21.418
00:24:21.418 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:21.418 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:21.418 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:21.679 {
00:24:21.679 "cntlid": 127,
00:24:21.679 "qid": 0,
00:24:21.679 "state": "enabled",
00:24:21.679 "thread": "nvmf_tgt_poll_group_000",
00:24:21.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:21.679 "listen_address": {
00:24:21.679 "trtype": "TCP",
00:24:21.679 "adrfam": "IPv4",
00:24:21.679 "traddr": "10.0.0.2",
00:24:21.679 "trsvcid": "4420"
00:24:21.679 },
00:24:21.679 "peer_address": {
00:24:21.679 "trtype": "TCP",
00:24:21.679 "adrfam": "IPv4",
00:24:21.679 "traddr": "10.0.0.1",
00:24:21.679 "trsvcid": "37744"
00:24:21.679 },
00:24:21.679 "auth": {
00:24:21.679 "state": "completed",
00:24:21.679 "digest": "sha512",
00:24:21.679 "dhgroup": "ffdhe4096"
00:24:21.679 }
00:24:21.679 }
00:24:21.679 ]'
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:21.679 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:21.939 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=:
00:24:21.939 17:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=:
00:24:22.508 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:22.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
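The qpair dump is the check that matters here: auth.digest, auth.dhgroup and auth.state must come back exactly as configured, proving the connection really authenticated with the forced parameters rather than falling back. The verification traced at target/auth.sh@74-77 boils down to this sketch (same jq filters as the script; rpc_cmd stands for the target-socket rpc.py call as above):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]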
00:24:22.508 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:22.508 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:22.508 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:22.769 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:23.031
00:24:23.031 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:23.031 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
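At this point the outer loop (target/auth.sh@119) has advanced to the next DH group, ffdhe6144, and replays the same keys. The hostrpc helper seen throughout at target/auth.sh@31 is what keeps the two SPDK instances apart: rpc_cmd drives the target over its default socket while hostrpc is rpc.py pointed at the initiator's socket. A minimal sketch of such a wrapper ($rootdir naming is an assumption; the script's actual definition may differ):

    hostrpc() {
        # Route the RPC to the host-side bdev/nvme application instead of the target.
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }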
00:24:23.031 17:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:23.292 {
00:24:23.292 "cntlid": 129,
00:24:23.292 "qid": 0,
00:24:23.292 "state": "enabled",
00:24:23.292 "thread": "nvmf_tgt_poll_group_000",
00:24:23.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:23.292 "listen_address": {
00:24:23.292 "trtype": "TCP",
00:24:23.292 "adrfam": "IPv4",
00:24:23.292 "traddr": "10.0.0.2",
00:24:23.292 "trsvcid": "4420"
00:24:23.292 },
00:24:23.292 "peer_address": {
00:24:23.292 "trtype": "TCP",
00:24:23.292 "adrfam": "IPv4",
00:24:23.292 "traddr": "10.0.0.1",
00:24:23.292 "trsvcid": "37774"
00:24:23.292 },
00:24:23.292 "auth": {
00:24:23.292 "state": "completed",
00:24:23.292 "digest": "sha512",
00:24:23.292 "dhgroup": "ffdhe6144"
00:24:23.292 }
00:24:23.292 }
00:24:23.292 ]'
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:23.292 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:23.552 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:24:23.552 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:23.552 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:23.552 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:23.552 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:23.552 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=:
00:24:23.552 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=:
00:24:24.494 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:24.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
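The ckey assignment traced at each target/auth.sh@68 does real work: bash's ${var:+word} expansion builds an optional argument pair, so when ckeys[$3] (the entry for the keyid passed to connect_authenticate) is unset or empty, the array stays empty and no --dhchap-ctrlr-key flag reaches the RPCs at all; that is why the key3 rounds above add the host with --dhchap-key key3 only. Sketch of the idiom, with the surrounding call reconstructed roughly:

    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    # "${ckey[@]}" expands to zero words when ckeys[$3] is empty,
    # and to the pair: --dhchap-ctrlr-key ckeyN otherwise.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"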
00:24:24.495 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:24.755
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:25.015 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:25.015 {
00:24:25.015 "cntlid": 131,
00:24:25.015 "qid": 0,
00:24:25.015 "state": "enabled",
00:24:25.015 "thread": "nvmf_tgt_poll_group_000",
00:24:25.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:25.015 "listen_address": {
00:24:25.015 "trtype": "TCP",
00:24:25.015 "adrfam": "IPv4",
00:24:25.015 "traddr": "10.0.0.2",
00:24:25.015 "trsvcid": "4420"
00:24:25.015 },
00:24:25.015 "peer_address": {
00:24:25.015 "trtype": "TCP",
00:24:25.015 "adrfam": "IPv4",
00:24:25.015 "traddr": "10.0.0.1",
00:24:25.015 "trsvcid": "37806"
00:24:25.015 },
00:24:25.015 "auth": {
00:24:25.016 "state": "completed",
00:24:25.016 "digest": "sha512",
00:24:25.016 "dhgroup": "ffdhe6144"
00:24:25.016 }
00:24:25.016 }
00:24:25.016 ]'
00:24:25.016 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:25.016 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:25.016 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:25.276 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:24:25.276 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:25.276 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:25.276 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:25.276 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:25.276 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==:
00:24:25.276 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==:
00:24:26.217 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:26.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:26.217 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:26.217 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:26.217 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:26.217 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:26.217 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:26.217 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:26.217 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:26.217 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:26.479
00:24:26.479 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:26.479 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:26.479 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:26.739 {
00:24:26.739 "cntlid": 133,
00:24:26.739 "qid": 0,
00:24:26.739 "state": "enabled",
00:24:26.739 "thread": "nvmf_tgt_poll_group_000",
00:24:26.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:26.739 "listen_address": {
00:24:26.739 "trtype": "TCP",
00:24:26.739 "adrfam": "IPv4",
00:24:26.739 "traddr": "10.0.0.2",
00:24:26.739 "trsvcid": "4420"
00:24:26.739 },
00:24:26.739 "peer_address": {
00:24:26.739 "trtype": "TCP",
00:24:26.739 "adrfam": "IPv4",
00:24:26.739 "traddr": "10.0.0.1",
00:24:26.739 "trsvcid": "33660"
00:24:26.739 },
00:24:26.739 "auth": {
00:24:26.739 "state": "completed",
00:24:26.739 "digest": "sha512",
00:24:26.739 "dhgroup": "ffdhe6144"
00:24:26.739 }
00:24:26.739 }
00:24:26.739 ]'
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:26.739 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:27.000 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:24:27.000 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:27.000 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:27.000 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:27.000 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
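The four DHHC-1 secrets cycled through this run differ in transformation and, visibly, in base64 length, consistent with 32-, 48- and 64-byte key material for the :01:/:02:/:03: variants. To mint secrets like these outside CI, recent nvme-cli releases ship a generator; the command and flags below are an assumption about locally installed tooling, not something this trace exercises:

    # Hypothetical key generation (nvme-cli); --hmac selects the transformation
    # (0 none, 1 SHA-256, 2 SHA-384, 3 SHA-512), yielding a DHHC-1:0N: string.
    nvme gen-dhchap-key --hmac=2 --key-length=48 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be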
00:24:27.000 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe:
00:24:27.000 17:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe:
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:27.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:27.940 17:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:28.201
00:24:28.201 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:28.201 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:28.201 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:28.461 {
00:24:28.461 "cntlid": 135,
00:24:28.461 "qid": 0,
00:24:28.461 "state": "enabled",
00:24:28.461 "thread": "nvmf_tgt_poll_group_000",
00:24:28.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:28.461 "listen_address": {
00:24:28.461 "trtype": "TCP",
00:24:28.461 "adrfam": "IPv4",
00:24:28.461 "traddr": "10.0.0.2",
00:24:28.461 "trsvcid": "4420"
00:24:28.461 },
00:24:28.461 "peer_address": {
00:24:28.461 "trtype": "TCP",
00:24:28.461 "adrfam": "IPv4",
00:24:28.461 "traddr": "10.0.0.1",
00:24:28.461 "trsvcid": "33688"
00:24:28.461 },
00:24:28.461 "auth": {
00:24:28.461 "state": "completed",
00:24:28.461 "digest": "sha512",
00:24:28.461 "dhgroup": "ffdhe6144"
00:24:28.461 }
00:24:28.461 }
00:24:28.461 ]'
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:28.461 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:28.722 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:24:28.722 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:28.722 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:28.722 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:28.722 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
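Note the asymmetry in the key3 rounds just traced: because no controller key exists for keyid 3, both the subsystem registration and the kernel connect carry only the host-side secret, so the controller authenticates the host but the host demands no proof in return, i.e. unidirectional DH-HMAC-CHAP. Contrast with the bidirectional connects above (sketch, secret shortened):

    # key3: host authentication only; no --dhchap-ctrl-secret is passed.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:03:Njc5...Z5I4=:'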
00:24:28.722 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=:
00:24:28.722 17:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=:
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:29.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:29.663 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:30.234
00:24:30.234 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:30.234 17:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:30.234 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:30.234 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:30.234 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:30.234 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:30.234 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:30.234 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:30.234 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:30.234 {
00:24:30.234 "cntlid": 137,
00:24:30.234 "qid": 0,
00:24:30.234 "state": "enabled",
00:24:30.234 "thread": "nvmf_tgt_poll_group_000",
00:24:30.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:30.234 "listen_address": {
00:24:30.234 "trtype": "TCP",
00:24:30.234 "adrfam": "IPv4",
00:24:30.234 "traddr": "10.0.0.2",
00:24:30.234 "trsvcid": "4420"
00:24:30.234 },
00:24:30.234 "peer_address": {
00:24:30.234 "trtype": "TCP",
00:24:30.235 "adrfam": "IPv4",
00:24:30.235 "traddr": "10.0.0.1",
00:24:30.235 "trsvcid": "33708"
00:24:30.235 },
00:24:30.235 "auth": {
00:24:30.235 "state": "completed",
00:24:30.235 "digest": "sha512",
00:24:30.235 "dhgroup": "ffdhe8192"
00:24:30.235 }
00:24:30.235 }
00:24:30.235 ]'
00:24:30.495 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:30.495 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:30.495 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:30.495 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:24:30.495 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:30.495 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:30.495 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:30.495 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:30.755 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=:
00:24:30.755 17:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=:
00:24:31.326 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:31.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:31.326 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:31.326 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.326 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:31.326 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:31.326 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:31.326 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:31.326 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:31.586 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:32.161
00:24:32.161 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:32.161 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:32.161 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:32.161 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.161 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:32.161 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:32.161 17:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:32.161 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:32.161 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:32.161 {
00:24:32.161 "cntlid": 139,
00:24:32.161 "qid": 0,
00:24:32.161 "state": "enabled",
00:24:32.161 "thread": "nvmf_tgt_poll_group_000",
00:24:32.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:32.161 "listen_address": {
00:24:32.161 "trtype": "TCP",
00:24:32.161 "adrfam": "IPv4",
00:24:32.161 "traddr": "10.0.0.2",
00:24:32.161 "trsvcid": "4420"
00:24:32.161 },
00:24:32.161 "peer_address": {
00:24:32.161 "trtype": "TCP",
00:24:32.161 "adrfam": "IPv4",
00:24:32.161 "traddr": "10.0.0.1",
00:24:32.161 "trsvcid": "33752"
00:24:32.161 },
00:24:32.161 "auth": {
00:24:32.161 "state": "completed",
00:24:32.161 "digest": "sha512",
00:24:32.161 "dhgroup": "ffdhe8192"
00:24:32.161 }
00:24:32.161 }
00:24:32.161 ]'
00:24:32.161 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:32.161 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:32.161 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:32.423 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:24:32.423 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:32.423 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:32.423 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:32.423 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:32.423 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==:
00:24:32.423 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: --dhchap-ctrl-secret DHHC-1:02:YzY3NjNhZjMxYzI3NjgyMzRlMjI0OGVjNWQ2ZjE3ZTA0MzI1ZWVhZTFlNzk3MjZlCoD8Zw==:
00:24:33.365 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:33.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:33.365 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:33.365 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:33.365 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:33.365 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:33.365 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:33.365 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:33.365 17:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:33.365 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:33.366 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:33.937
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:33.937 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:33.937 {
00:24:33.937 "cntlid": 141,
00:24:33.937 "qid": 0,
00:24:33.937 "state": "enabled",
00:24:33.937 "thread": "nvmf_tgt_poll_group_000",
00:24:33.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:33.937 "listen_address": {
00:24:33.937 "trtype": "TCP",
00:24:33.937 "adrfam": "IPv4",
00:24:33.937 "traddr": "10.0.0.2",
00:24:33.937 "trsvcid": "4420"
00:24:33.937 },
00:24:33.937 "peer_address": {
00:24:33.937 "trtype": "TCP",
00:24:33.937 "adrfam": "IPv4",
00:24:33.937 "traddr": "10.0.0.1",
00:24:33.937 "trsvcid": "33784"
00:24:33.937 },
00:24:33.937 "auth": {
00:24:33.937 "state": "completed",
00:24:33.937 "digest": "sha512",
00:24:33.937 "dhgroup": "ffdhe8192"
00:24:33.937 }
00:24:33.937 }
00:24:33.937 ]'
00:24:34.197 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:34.197 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:34.197 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:34.197 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:24:34.197 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:34.197 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:34.197 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:34.197 17:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:34.458 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe:
00:24:34.458 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:01:YzYyZDNiNmJiNDU2ZjNlZGMyOGM5MTU2YmRkN2FkMWQ1+oWe:
00:24:35.029 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:35.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:35.029 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:35.029 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:35.029 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:35.029 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:35.029 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:35.029 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:35.029 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:35.290 17:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:35.290 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:35.290 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:24:35.290 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:35.290 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:35.551
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:35.813 {
00:24:35.813 "cntlid": 143,
00:24:35.813 "qid": 0,
00:24:35.813 "state": "enabled",
00:24:35.813 "thread": "nvmf_tgt_poll_group_000",
00:24:35.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:35.813 "listen_address": {
00:24:35.813 "trtype": "TCP",
00:24:35.813 "adrfam": "IPv4",
00:24:35.813 "traddr": "10.0.0.2",
00:24:35.813 "trsvcid": "4420"
00:24:35.813 },
00:24:35.813 "peer_address": {
00:24:35.813 "trtype": "TCP",
00:24:35.813 "adrfam": "IPv4",
00:24:35.813 "traddr": "10.0.0.1",
00:24:35.813 "trsvcid": "33312"
00:24:35.813 },
00:24:35.813 "auth": {
00:24:35.813 "state": "completed",
00:24:35.813 "digest": "sha512",
00:24:35.813 "dhgroup": "ffdhe8192"
00:24:35.813 }
00:24:35.813 }
00:24:35.813 ]'
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:35.813 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:24:36.074 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:36.074 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:24:36.074 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:36.074 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:36.074 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:36.074 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:36.335 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=:
00:24:36.335 17:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=:
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:36.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:36.906 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:36.907 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:36.907 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:37.167 17:51:36
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.167 17:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.429 00:24:37.429 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:37.429 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:37.429 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:37.690 { 00:24:37.690 "cntlid": 145, 00:24:37.690 "qid": 0, 00:24:37.690 "state": "enabled", 00:24:37.690 "thread": "nvmf_tgt_poll_group_000", 00:24:37.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:37.690 "listen_address": { 00:24:37.690 "trtype": "TCP", 00:24:37.690 "adrfam": "IPv4", 00:24:37.690 "traddr": "10.0.0.2", 00:24:37.690 "trsvcid": "4420" 00:24:37.690 }, 00:24:37.690 "peer_address": { 00:24:37.690 
"trtype": "TCP", 00:24:37.690 "adrfam": "IPv4", 00:24:37.690 "traddr": "10.0.0.1", 00:24:37.690 "trsvcid": "33340" 00:24:37.690 }, 00:24:37.690 "auth": { 00:24:37.690 "state": "completed", 00:24:37.690 "digest": "sha512", 00:24:37.690 "dhgroup": "ffdhe8192" 00:24:37.690 } 00:24:37.690 } 00:24:37.690 ]' 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:37.690 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:37.950 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:37.950 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:37.950 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.950 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:24:37.950 17:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTQ2Yzk4YzFmMzJkNmFhNDUyNTQ3ZDQ1NTRjMmIzOGY5MzNhZGI4MTM4NDE2NWZldpFlLw==: --dhchap-ctrl-secret DHHC-1:03:N2VkZTJjOTVjMzFlNjczNzZiOTUxMmVlZWRkMzA3NzQzNTE2NjRjZmE1YjFlMjk0MzhlNDMxOTg5NmEzMTM2OZiz4ws=: 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:38.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:38.891 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:39.152 request: 00:24:39.152 { 00:24:39.152 "name": "nvme0", 00:24:39.152 "trtype": "tcp", 00:24:39.152 "traddr": "10.0.0.2", 00:24:39.152 "adrfam": "ipv4", 00:24:39.152 "trsvcid": "4420", 00:24:39.152 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:39.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:39.152 "prchk_reftag": false, 00:24:39.152 "prchk_guard": false, 00:24:39.152 "hdgst": false, 00:24:39.152 "ddgst": false, 00:24:39.152 "dhchap_key": "key2", 00:24:39.152 "allow_unrecognized_csi": false, 00:24:39.152 "method": "bdev_nvme_attach_controller", 00:24:39.152 "req_id": 1 00:24:39.152 } 00:24:39.152 Got JSON-RPC error response 00:24:39.152 response: 00:24:39.152 { 00:24:39.152 "code": -5, 00:24:39.152 "message": "Input/output error" 00:24:39.152 } 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.152 17:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.152 17:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.723 request: 00:24:39.723 { 00:24:39.723 "name": "nvme0", 00:24:39.723 "trtype": "tcp", 00:24:39.723 "traddr": "10.0.0.2", 00:24:39.723 "adrfam": "ipv4", 00:24:39.723 "trsvcid": "4420", 00:24:39.723 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:39.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:39.723 "prchk_reftag": false, 00:24:39.723 "prchk_guard": false, 00:24:39.723 "hdgst": false, 00:24:39.723 "ddgst": false, 00:24:39.723 "dhchap_key": "key1", 00:24:39.723 "dhchap_ctrlr_key": "ckey2", 00:24:39.723 "allow_unrecognized_csi": false, 00:24:39.723 "method": "bdev_nvme_attach_controller", 00:24:39.723 "req_id": 1 00:24:39.723 } 00:24:39.723 Got JSON-RPC error response 00:24:39.723 response: 00:24:39.723 { 00:24:39.723 "code": -5, 00:24:39.723 "message": "Input/output error" 00:24:39.723 } 00:24:39.723 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:39.723 17:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:39.723 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:39.723 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:39.723 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.723 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.723 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.724 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.986 request: 00:24:39.986 { 00:24:39.986 "name": "nvme0", 00:24:39.986 "trtype": "tcp", 00:24:39.986 "traddr": "10.0.0.2", 00:24:39.986 "adrfam": "ipv4", 00:24:39.986 "trsvcid": "4420", 00:24:39.986 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:39.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:39.986 "prchk_reftag": false, 00:24:39.986 "prchk_guard": false, 00:24:39.986 "hdgst": false, 00:24:39.986 "ddgst": false, 00:24:39.986 "dhchap_key": "key1", 00:24:39.986 "dhchap_ctrlr_key": "ckey1", 00:24:39.986 "allow_unrecognized_csi": false, 00:24:39.986 "method": "bdev_nvme_attach_controller", 00:24:39.986 "req_id": 1 00:24:39.986 } 00:24:39.986 Got JSON-RPC error response 00:24:39.986 response: 00:24:39.986 { 00:24:39.986 "code": -5, 00:24:39.986 "message": "Input/output error" 00:24:39.986 } 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2675284 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2675284 ']' 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2675284 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.986 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2675284 00:24:40.246 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.246 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.246 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2675284' 00:24:40.246 killing process with pid 2675284 00:24:40.246 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2675284 00:24:40.246 17:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2675284 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=2701190 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 2701190 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2701190 ']' 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.246 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2701190 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2701190 ']' 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
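The restart traced above follows the usual SPDK pattern for these auth tests: kill the previous target, start a fresh nvmf_tgt with --wait-for-rpc so nothing initializes before the keyring is configured, and poll the RPC socket until it answers. A minimal sketch of that pattern, assuming the standard SPDK layout; the retry budget is illustrative, and the waitforlisten helper invoked above is more elaborate than this:

build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# rpc_get_methods succeeds once the app is listening on the default /var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
# with --wait-for-rpc the framework stays uninitialized until explicitly started
scripts/rpc.py framework_start_init
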
00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:41.187 17:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 null0 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.U5E 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.yhC ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yhC 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.idm 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.tEP ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tEP 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:41.448 17:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sp1 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.YDn ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YDn 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wwr 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
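At this point each DHCHAP secret has been registered as a named keyring entry, so the remaining steps refer to keys by name rather than passing raw DHHC-1 strings. The essential sequence, condensed from the trace above (host-side calls go through -s /var/tmp/host.sock; the full hostnqn from the log is abbreviated to <hostnqn> here for readability, and the host app needs a matching keyring entry for any key name it references):

# target side: register the key file and allow the host to authenticate with it
rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.wwr
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3
# host side: attach using the same key name
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
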
00:24:41.448 17:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:42.390 nvme0n1 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:42.390 { 00:24:42.390 "cntlid": 1, 00:24:42.390 "qid": 0, 00:24:42.390 "state": "enabled", 00:24:42.390 "thread": "nvmf_tgt_poll_group_000", 00:24:42.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:42.390 "listen_address": { 00:24:42.390 "trtype": "TCP", 00:24:42.390 "adrfam": "IPv4", 00:24:42.390 "traddr": "10.0.0.2", 00:24:42.390 "trsvcid": "4420" 00:24:42.390 }, 00:24:42.390 "peer_address": { 00:24:42.390 "trtype": "TCP", 00:24:42.390 "adrfam": "IPv4", 00:24:42.390 "traddr": "10.0.0.1", 00:24:42.390 "trsvcid": "33404" 00:24:42.390 }, 00:24:42.390 "auth": { 00:24:42.390 "state": "completed", 00:24:42.390 "digest": "sha512", 00:24:42.390 "dhgroup": "ffdhe8192" 00:24:42.390 } 00:24:42.390 } 00:24:42.390 ]' 00:24:42.390 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:42.650 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:42.650 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:42.650 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:42.650 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:42.650 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:42.650 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:42.650 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:42.910 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:42.910 17:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:43.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:43.478 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:43.737 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:43.737 request: 00:24:43.737 { 00:24:43.738 "name": "nvme0", 00:24:43.738 "trtype": "tcp", 00:24:43.738 "traddr": "10.0.0.2", 00:24:43.738 "adrfam": "ipv4", 00:24:43.738 "trsvcid": "4420", 00:24:43.738 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:43.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:43.738 "prchk_reftag": false, 00:24:43.738 "prchk_guard": false, 00:24:43.738 "hdgst": false, 00:24:43.738 "ddgst": false, 00:24:43.738 "dhchap_key": "key3", 00:24:43.738 "allow_unrecognized_csi": false, 00:24:43.738 "method": "bdev_nvme_attach_controller", 00:24:43.738 "req_id": 1 00:24:43.738 } 00:24:43.738 Got JSON-RPC error response 00:24:43.738 response: 00:24:43.738 { 00:24:43.738 "code": -5, 00:24:43.738 "message": "Input/output error" 00:24:43.738 } 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:43.998 17:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:44.258 request: 00:24:44.258 { 00:24:44.258 "name": "nvme0", 00:24:44.258 "trtype": "tcp", 00:24:44.258 "traddr": "10.0.0.2", 00:24:44.258 "adrfam": "ipv4", 00:24:44.258 "trsvcid": "4420", 00:24:44.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:44.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:44.258 "prchk_reftag": false, 00:24:44.258 "prchk_guard": false, 00:24:44.258 "hdgst": false, 00:24:44.258 "ddgst": false, 00:24:44.258 "dhchap_key": "key3", 00:24:44.258 "allow_unrecognized_csi": false, 00:24:44.258 "method": "bdev_nvme_attach_controller", 00:24:44.258 "req_id": 1 00:24:44.258 } 00:24:44.258 Got JSON-RPC error response 00:24:44.258 response: 00:24:44.258 { 00:24:44.258 "code": -5, 00:24:44.258 "message": "Input/output error" 00:24:44.258 } 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:44.258 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:44.519 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:44.779 request: 00:24:44.779 { 00:24:44.779 "name": "nvme0", 00:24:44.779 "trtype": "tcp", 00:24:44.779 "traddr": "10.0.0.2", 00:24:44.779 "adrfam": "ipv4", 00:24:44.779 "trsvcid": "4420", 00:24:44.779 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:44.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:24:44.779 "prchk_reftag": false, 00:24:44.779 "prchk_guard": false, 00:24:44.779 "hdgst": false, 00:24:44.779 "ddgst": false, 00:24:44.779 "dhchap_key": "key0", 00:24:44.779 "dhchap_ctrlr_key": "key1", 00:24:44.779 "allow_unrecognized_csi": false, 00:24:44.779 "method": "bdev_nvme_attach_controller", 00:24:44.779 "req_id": 1 00:24:44.779 } 00:24:44.779 Got JSON-RPC error response 00:24:44.779 response: 00:24:44.779 { 00:24:44.779 "code": -5, 00:24:44.779 "message": "Input/output error" 00:24:44.779 } 00:24:44.779 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:44.779 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:44.779 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:44.779 17:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:44.779 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:24:44.779 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:44.779 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:45.040 nvme0n1 00:24:45.040 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:24:45.040 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:24:45.040 17:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:45.300 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:46.240 nvme0n1 00:24:46.240 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:24:46.240 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:24:46.240 17:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.240 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.240 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:46.240 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.240 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.240 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.240 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:24:46.240 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.240 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:24:46.501 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.501 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:46.501 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: --dhchap-ctrl-secret DHHC-1:03:Njc5YzQxNmYxZWJjMWI5ZjBkODZiYzY1NjIzMTg1NzBlMzU0NWU4OGUzMTc0ZDQ4YzM5MDk3NjQ1ODAzMjc4OcgZ5I4=: 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:47.070 17:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:47.331 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:24:47.902 request:
00:24:47.902 {
00:24:47.902 "name": "nvme0",
00:24:47.902 "trtype": "tcp",
00:24:47.902 "traddr": "10.0.0.2",
00:24:47.902 "adrfam": "ipv4",
00:24:47.902 "trsvcid": "4420",
00:24:47.902 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:24:47.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:24:47.902 "prchk_reftag": false,
00:24:47.902 "prchk_guard": false,
00:24:47.902 "hdgst": false,
00:24:47.902 "ddgst": false,
00:24:47.902 "dhchap_key": "key1",
00:24:47.902 "allow_unrecognized_csi": false,
00:24:47.902 "method": "bdev_nvme_attach_controller",
00:24:47.902 "req_id": 1
00:24:47.902 }
00:24:47.902 Got JSON-RPC error response
00:24:47.902 response:
00:24:47.902 {
00:24:47.902 "code": -5,
00:24:47.902 "message": "Input/output error"
00:24:47.902 }
00:24:47.902 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:24:47.902 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:47.902 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:47.902 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:47.902 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:47.902 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:47.902 17:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:48.472 nvme0n1 00:24:48.472 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:24:48.472 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:24:48.472 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:48.732 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.732 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:48.732 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:48.992 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:48.992 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.992 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.992 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.992 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:24:48.992 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:48.992 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:49.253 nvme0n1 00:24:49.253 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:49.253 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:49.253 17:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.253 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.253 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:49.253 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: '' 2s 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: ]] 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzU0NWY4NGVmYjYxNzk5NjRkNGQzNWQ1YzViMDU5NGFVi/43: 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:49.513 17:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: 2s 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:52.054 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: ]] 00:24:52.055 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjYyMTcxODgyYzdmMDNkNWY3YzQyNWFjOTYwZjNkYjFkOTM2M2Y5NjkzYmY1NDBhbnkoSg==: 00:24:52.055 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:52.055 17:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:53.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:53.974 17:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:54.544 nvme0n1 00:24:54.544 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:54.544 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.544 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.544 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.544 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:54.544 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:54.805 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:54.805 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:54.805 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:55.066 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.066 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:55.066 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.066 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.066 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.066 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:55.066 17:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name'
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:55.327 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:24:55.587 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:55.587 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:24:55.587 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:55.587 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:55.587 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:24:55.847 request:
00:24:55.847 {
00:24:55.847 "name": "nvme0",
00:24:55.847 "dhchap_key": "key1",
00:24:55.847 "dhchap_ctrlr_key": "key3",
00:24:55.847 "method": "bdev_nvme_set_keys",
00:24:55.847 "req_id": 1
00:24:55.847 }
00:24:55.847 Got JSON-RPC error response
00:24:55.847 response:
00:24:55.847 {
00:24:55.847 "code": -13,
00:24:55.847 "message": "Permission denied"
00:24:55.847 }
00:24:55.847 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:24:55.847 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:55.847 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:55.847 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:55.847 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:24:55.847 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:24:55.847 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:56.107 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@262 -- # (( 1 != 0 )) 00:24:56.107 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:57.047 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:57.047 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:57.047 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:57.307 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:57.307 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:57.307 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.307 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.307 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.307 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:57.307 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:57.307 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:57.877 nvme0n1 00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
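The assertion being expanded here is the core of the rotation test, and its intent is easier to see with the helper plumbing stripped away. A minimal sketch of the two-sided rekey sequence, using only commands that appear verbatim in this trace (full rpc.py paths shortened; the NOT wrapper and the polling/assert helpers are omitted):

    # Target side: stage the next DH-HMAC-CHAP key pair for this subsystem/host pair.
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side: re-authenticate the live controller; this succeeds only when the
    # pair matches what the target staged (done successfully earlier in the trace).
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

    # Negative case exercised next: a mismatched pair (key2 with key0) should be
    # rejected with JSON-RPC error -13 "Permission denied".
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0

Because the controller was attached with --ctrlr-loss-timeout-sec 1 and --reconnect-delay-sec 1, the script can then poll bdev_nvme_get_controllers (the jq length loop below) until the count drops from 1 to 0, confirming that the failed rekey takes the connection down.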
00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:58.138 17:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:58.398 request:
00:24:58.398 {
00:24:58.398 "name": "nvme0",
00:24:58.398 "dhchap_key": "key2",
00:24:58.398 "dhchap_ctrlr_key": "key0",
00:24:58.398 "method": "bdev_nvme_set_keys",
00:24:58.398 "req_id": 1
00:24:58.398 }
00:24:58.398 Got JSON-RPC error response
00:24:58.398 response:
00:24:58.398 {
00:24:58.398 "code": -13,
00:24:58.398 "message": "Permission denied"
00:24:58.398 }
00:24:58.399 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:24:58.399 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:58.399 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:58.399 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:58.399 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:24:58.399 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:24:58.399 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:58.659 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:24:58.659 17:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:24:59.600 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:24:59.600 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:24:59.600 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2675436
00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2675436 ']'
00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2675436
00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:24:59.861
17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2675436 00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2675436' 00:24:59.861 killing process with pid 2675436 00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2675436 00:24:59.861 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2675436 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.123 rmmod nvme_tcp 00:25:00.123 rmmod nvme_fabrics 00:25:00.123 rmmod nvme_keyring 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 2701190 ']' 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 2701190 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2701190 ']' 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2701190 00:25:00.123 17:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:25:00.123 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:00.123 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2701190 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2701190' 00:25:00.384 killing process with pid 2701190 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2701190 00:25:00.384 17:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2701190 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.384 17:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.U5E /tmp/spdk.key-sha256.idm /tmp/spdk.key-sha384.sp1 /tmp/spdk.key-sha512.wwr /tmp/spdk.key-sha512.yhC /tmp/spdk.key-sha384.tEP /tmp/spdk.key-sha256.YDn '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:25:02.935 00:25:02.935 real 2m36.902s 00:25:02.935 user 5m52.872s 00:25:02.935 sys 0m24.819s 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.935 ************************************ 00:25:02.935 END TEST nvmf_auth_target 00:25:02.935 ************************************ 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:02.935 ************************************ 00:25:02.935 START TEST nvmf_bdevio_no_huge 00:25:02.935 ************************************ 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:02.935 * Looking for test storage... 
00:25:02.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:25:02.935 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:02.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.936 --rc genhtml_branch_coverage=1 00:25:02.936 --rc genhtml_function_coverage=1 00:25:02.936 --rc genhtml_legend=1 00:25:02.936 --rc geninfo_all_blocks=1 00:25:02.936 --rc geninfo_unexecuted_blocks=1 00:25:02.936 00:25:02.936 ' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:02.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.936 --rc genhtml_branch_coverage=1 00:25:02.936 --rc genhtml_function_coverage=1 00:25:02.936 --rc genhtml_legend=1 00:25:02.936 --rc geninfo_all_blocks=1 00:25:02.936 --rc geninfo_unexecuted_blocks=1 00:25:02.936 00:25:02.936 ' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:02.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.936 --rc genhtml_branch_coverage=1 00:25:02.936 --rc genhtml_function_coverage=1 00:25:02.936 --rc genhtml_legend=1 00:25:02.936 --rc geninfo_all_blocks=1 00:25:02.936 --rc geninfo_unexecuted_blocks=1 00:25:02.936 00:25:02.936 ' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:02.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.936 --rc genhtml_branch_coverage=1 00:25:02.936 --rc genhtml_function_coverage=1 00:25:02.936 --rc genhtml_legend=1 00:25:02.936 --rc geninfo_all_blocks=1 00:25:02.936 --rc geninfo_unexecuted_blocks=1 00:25:02.936 00:25:02.936 ' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:02.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:25:02.936 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.087 
17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:11.087 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
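Before the second port is classified, it is worth noting what the loop above is doing: matching each PCI function against the known e810/x722/mlx device-ID tables and then resolving the kernel net device behind it. Condensed to the expansions that appear verbatim in this trace (only the e810/TCP path is shown; the link-state and RDMA branches are skipped):

    # For each matched NIC, look up its net device via sysfs and record the name.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
        net_devs+=("${pci_net_devs[@]}")                   # collected: cvl_0_0, cvl_0_1
    done

The two names collected this way, cvl_0_0 and cvl_0_1, become the target and initiator interfaces for the TCP test topology configured at the end of this setup.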
00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:11.087 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:11.087 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:11.087 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:11.087 
17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:11.087 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.088 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:11.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:11.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms
00:25:11.088
00:25:11.088 --- 10.0.0.2 ping statistics ---
00:25:11.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:11.088 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:11.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:11.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms
00:25:11.088
00:25:11.088 --- 10.0.0.1 ping statistics ---
00:25:11.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:11.088 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=2709257
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 2709257
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2709257 ']'
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge --
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:11.088 [2024-11-20 17:52:10.138044] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:11.088 [2024-11-20 17:52:10.138114] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:11.088 [2024-11-20 17:52:10.229516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.088 [2024-11-20 17:52:10.311862] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.088 [2024-11-20 17:52:10.311912] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.088 [2024-11-20 17:52:10.311920] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.088 [2024-11-20 17:52:10.311927] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.088 [2024-11-20 17:52:10.311933] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.088 [2024-11-20 17:52:10.312089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:11.088 [2024-11-20 17:52:10.312228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:25:11.088 [2024-11-20 17:52:10.312395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:25:11.088 [2024-11-20 17:52:10.312396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:11.088 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:11.350 [2024-11-20 17:52:11.020000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:11.350 Malloc0 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:11.350 [2024-11-20 17:52:11.073884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:11.350 { 00:25:11.350 "params": { 00:25:11.350 "name": "Nvme$subsystem", 00:25:11.350 "trtype": "$TEST_TRANSPORT", 00:25:11.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.350 "adrfam": "ipv4", 00:25:11.350 "trsvcid": "$NVMF_PORT", 00:25:11.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.350 "hdgst": ${hdgst:-false}, 00:25:11.350 "ddgst": ${ddgst:-false} 00:25:11.350 }, 00:25:11.350 "method": "bdev_nvme_attach_controller" 00:25:11.350 } 00:25:11.350 EOF 00:25:11.350 )") 00:25:11.350 17:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:25:11.350 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:11.350 "params": { 00:25:11.350 "name": "Nvme1", 00:25:11.350 "trtype": "tcp", 00:25:11.350 "traddr": "10.0.0.2", 00:25:11.350 "adrfam": "ipv4", 00:25:11.350 "trsvcid": "4420", 00:25:11.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.350 "hdgst": false, 00:25:11.350 "ddgst": false 00:25:11.350 }, 00:25:11.350 "method": "bdev_nvme_attach_controller" 00:25:11.350 }' 00:25:11.350 [2024-11-20 17:52:11.130325] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:11.350 [2024-11-20 17:52:11.130404] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2709507 ] 00:25:11.350 [2024-11-20 17:52:11.213483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:11.611 [2024-11-20 17:52:11.293705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.611 [2024-11-20 17:52:11.293865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.611 [2024-11-20 17:52:11.293865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.611 I/O targets: 00:25:11.611 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:11.611 00:25:11.611 00:25:11.611 CUnit - A unit testing framework for C - Version 2.1-3 00:25:11.611 http://cunit.sourceforge.net/ 00:25:11.611 00:25:11.611 00:25:11.611 Suite: bdevio tests on: Nvme1n1 00:25:11.611 Test: blockdev write read block ...passed 00:25:11.873 Test: blockdev write zeroes read block ...passed 00:25:11.873 Test: blockdev write zeroes read no split ...passed 00:25:11.873 Test: blockdev write zeroes read split ...passed 00:25:11.873 Test: blockdev write zeroes read split partial ...passed 00:25:11.873 Test: blockdev reset ...[2024-11-20 17:52:11.648769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.873 [2024-11-20 17:52:11.648879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4fa90 (9): Bad file descriptor 00:25:11.873 [2024-11-20 17:52:11.703463] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
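The --json /dev/fd/62 argument traced above is bash process substitution over the document assembled by gen_nvmf_target_json. A hand-rolled equivalent of that bdevio invocation would look roughly like the sketch below; paths, NQNs, and addresses come from the log, while the outer "subsystems"/"config" wrapper is an assumption based on SPDK's JSON config layout, since the trace only prints the inner fragment.

    # Sketch: running bdevio by hand against the listener set up above.
    # <(cat <<'EOF' ...) reproduces the /dev/fd/62 process substitution.
    ./test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )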
00:25:11.873 passed 00:25:11.873 Test: blockdev write read 8 blocks ...passed 00:25:11.873 Test: blockdev write read size > 128k ...passed 00:25:11.873 Test: blockdev write read invalid size ...passed 00:25:12.134 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:12.134 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:12.134 Test: blockdev write read max offset ...passed 00:25:12.134 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:12.134 Test: blockdev writev readv 8 blocks ...passed 00:25:12.134 Test: blockdev writev readv 30 x 1block ...passed 00:25:12.134 Test: blockdev writev readv block ...passed 00:25:12.134 Test: blockdev writev readv size > 128k ...passed 00:25:12.134 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:12.134 Test: blockdev comparev and writev ...[2024-11-20 17:52:11.928977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:12.134 [2024-11-20 17:52:11.929036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:12.134 [2024-11-20 17:52:11.929054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:12.134 [2024-11-20 17:52:11.929063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:12.134 [2024-11-20 17:52:11.929612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:12.134 [2024-11-20 17:52:11.929628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:12.134 [2024-11-20 17:52:11.929644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:12.134 [2024-11-20 17:52:11.929655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:12.134 [2024-11-20 17:52:11.930177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:12.134 [2024-11-20 17:52:11.930191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:12.134 [2024-11-20 17:52:11.930205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:12.134 [2024-11-20 17:52:11.930214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:12.135 [2024-11-20 17:52:11.930904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:12.135 [2024-11-20 17:52:11.930917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:12.135 [2024-11-20 17:52:11.930932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:12.135 [2024-11-20 17:52:11.930940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:12.135 passed
00:25:12.135 Test: blockdev nvme passthru rw ...passed
00:25:12.135 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:52:12.015771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:12.135 [2024-11-20 17:52:12.015788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:12.135 [2024-11-20 17:52:12.016040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:12.135 [2024-11-20 17:52:12.016052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:12.135 [2024-11-20 17:52:12.016434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:12.135 [2024-11-20 17:52:12.016453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:25:12.135 [2024-11-20 17:52:12.016857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:25:12.135 [2024-11-20 17:52:12.016871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:25:12.135 passed
00:25:12.135 Test: blockdev nvme admin passthru ...passed
00:25:12.395 Test: blockdev copy ...passed
00:25:12.396
00:25:12.396 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:25:12.396               suites      1      1    n/a      0        0
00:25:12.396                tests     23     23     23      0        0
00:25:12.396              asserts    152    152    152      0      n/a
00:25:12.396
00:25:12.396 Elapsed time = 1.205 seconds
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:12.656 rmmod nvme_tcp
00:25:12.656 rmmod nvme_fabrics
00:25:12.656 rmmod nvme_keyring
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge --
nvmf/common.sh@128 -- # set -e 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 2709257 ']' 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 2709257 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2709257 ']' 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2709257 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2709257 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2709257' 00:25:12.656 killing process with pid 2709257 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2709257 00:25:12.656 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2709257 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.917 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.466 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.466 00:25:15.466 real 0m12.566s 00:25:15.466 user 0m14.000s 00:25:15.466 sys 0m6.838s 00:25:15.466 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:15.466 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.466 ************************************ 00:25:15.466 END TEST nvmf_bdevio_no_huge 00:25:15.466 ************************************ 00:25:15.466 17:52:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:15.466 17:52:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:15.466 17:52:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:15.466 17:52:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:15.466 ************************************ 00:25:15.466 START TEST nvmf_tls 00:25:15.466 ************************************ 00:25:15.466 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:15.466 * Looking for test storage... 00:25:15.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:15.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.466 --rc genhtml_branch_coverage=1 00:25:15.466 --rc genhtml_function_coverage=1 00:25:15.466 --rc genhtml_legend=1 00:25:15.466 --rc geninfo_all_blocks=1 00:25:15.466 --rc geninfo_unexecuted_blocks=1 00:25:15.466 00:25:15.466 ' 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:15.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.466 --rc genhtml_branch_coverage=1 00:25:15.466 --rc genhtml_function_coverage=1 00:25:15.466 --rc genhtml_legend=1 00:25:15.466 --rc geninfo_all_blocks=1 00:25:15.466 --rc geninfo_unexecuted_blocks=1 00:25:15.466 00:25:15.466 ' 00:25:15.466 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:15.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.466 --rc genhtml_branch_coverage=1 00:25:15.466 --rc genhtml_function_coverage=1 00:25:15.466 --rc genhtml_legend=1 00:25:15.466 --rc geninfo_all_blocks=1 00:25:15.466 --rc geninfo_unexecuted_blocks=1 00:25:15.466 00:25:15.467 ' 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:15.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.467 --rc genhtml_branch_coverage=1 00:25:15.467 --rc genhtml_function_coverage=1 00:25:15.467 --rc genhtml_legend=1 00:25:15.467 --rc geninfo_all_blocks=1 00:25:15.467 --rc geninfo_unexecuted_blocks=1 00:25:15.467 00:25:15.467 ' 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
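The cmp_versions trace above implements a dotted-version comparison for the lcov check. Reduced to the '<' case it is roughly the sketch below; the real scripts/common.sh handles arbitrary operators, so this is a simplification.

    # Sketch: "is version $1 strictly less than version $2", split on '.' and '-'.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not strictly less
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # matches the lt 1.15 2 call traced above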
00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.467 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
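The "integer expression expected" complaint captured a little further up comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test's -eq requires integer operands, and the left-hand variable expanded to the empty string. Minimal reproduction:

    # [ with -eq demands integers on both sides; an unset/empty variable trips it.
    unset flag
    [ "$flag" -eq 1 ] && echo enabled
    # -> [: : integer expression expected  (exit status 2, && branch not taken)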
00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:23.615 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:23.615 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:23.615 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:23.615 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:23.616 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
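The nvmf_tcp_init entries that follow wire the two E810 ports into a point-to-point test bed, with the target side isolated in a network namespace. Condensed, with every name and address exactly as traced, the setup is:

    # Target port cvl_0_0 moves into its own namespace; cvl_0_1 stays in the
    # root namespace as the initiator side. Error handling omitted.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator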
00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:23.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:25:23.616 00:25:23.616 --- 10.0.0.2 ping statistics --- 00:25:23.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.616 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:25:23.616 00:25:23.616 --- 10.0.0.1 ping statistics --- 00:25:23.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.616 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2713921 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2713921 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2713921 ']' 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.616 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.616 [2024-11-20 17:52:22.741632] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:23.616 [2024-11-20 17:52:22.741702] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.616 [2024-11-20 17:52:22.835738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.616 [2024-11-20 17:52:22.882504] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.616 [2024-11-20 17:52:22.882558] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.616 [2024-11-20 17:52:22.882566] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.616 [2024-11-20 17:52:22.882573] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.616 [2024-11-20 17:52:22.882579] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.616 [2024-11-20 17:52:22.882602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:23.879 true 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:23.879 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:25:24.139 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:25:24.139 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:25:24.139 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:24.400 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:24.400 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:25:24.662 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:25:24.662 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:25:24.662 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:24.662 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:24.662 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:25:24.924 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:25:24.924 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:25:24.924 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:24.924 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:25:25.186 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:25:25.186 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:25:25.186 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:25.186 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:25.186 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:25:25.449 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:25:25.449 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:25:25.449 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:25:25.711 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.hUYf667lLY 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.uqIil78t7i 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hUYf667lLY 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.uqIil78t7i 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:25.973 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:25:26.234 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.hUYf667lLY 00:25:26.234 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hUYf667lLY 00:25:26.234 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:26.496 [2024-11-20 17:52:26.246549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.496 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:26.757 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:26.757 [2024-11-20 17:52:26.583379] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:26.757 [2024-11-20 17:52:26.583588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.757 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:27.018 malloc0 00:25:27.018 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:27.279 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hUYf667lLY 00:25:27.279 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:27.541 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hUYf667lLY 00:25:37.718 Initializing NVMe Controllers 00:25:37.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.718 Initialization complete. Launching workers. 00:25:37.718 ======================================================== 00:25:37.718 Latency(us) 00:25:37.718 Device Information : IOPS MiB/s Average min max 00:25:37.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18823.95 73.53 3400.12 1063.51 4169.19 00:25:37.718 ======================================================== 00:25:37.718 Total : 18823.95 73.53 3400.12 1063.51 4169.19 00:25:37.718 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUYf667lLY 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUYf667lLY 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2716747 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2716747 /var/tmp/bdevperf.sock 00:25:37.718 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:37.719 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2716747 ']' 00:25:37.719 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:37.719 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.719 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:37.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:37.719 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.719 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:37.719 [2024-11-20 17:52:37.422144] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:37.719 [2024-11-20 17:52:37.422207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716747 ] 00:25:37.719 [2024-11-20 17:52:37.499276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.719 [2024-11-20 17:52:37.530599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.659 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.659 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:38.659 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUYf667lLY 00:25:38.659 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:38.659 [2024-11-20 17:52:38.491551] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:38.659 TLSTESTn1 00:25:38.919 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:38.919 Running I/O for 10 seconds... 
00:25:40.800 4877.00 IOPS, 19.05 MiB/s [2024-11-20T16:52:42.098Z] 5417.00 IOPS, 21.16 MiB/s [2024-11-20T16:52:43.037Z] 5218.00 IOPS, 20.38 MiB/s [2024-11-20T16:52:43.979Z] 5230.00 IOPS, 20.43 MiB/s [2024-11-20T16:52:44.922Z] 5268.80 IOPS, 20.58 MiB/s [2024-11-20T16:52:45.864Z] 5391.33 IOPS, 21.06 MiB/s [2024-11-20T16:52:46.805Z] 5410.86 IOPS, 21.14 MiB/s [2024-11-20T16:52:47.748Z] 5527.25 IOPS, 21.59 MiB/s [2024-11-20T16:52:49.134Z] 5640.67 IOPS, 22.03 MiB/s [2024-11-20T16:52:49.135Z] 5702.00 IOPS, 22.27 MiB/s 00:25:49.219 Latency(us) 00:25:49.219 [2024-11-20T16:52:49.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.219 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:49.219 Verification LBA range: start 0x0 length 0x2000 00:25:49.219 TLSTESTn1 : 10.01 5706.45 22.29 0.00 0.00 22398.16 5352.11 50681.17 00:25:49.219 [2024-11-20T16:52:49.135Z] =================================================================================================================== 00:25:49.219 [2024-11-20T16:52:49.135Z] Total : 5706.45 22.29 0.00 0.00 22398.16 5352.11 50681.17 00:25:49.219 { 00:25:49.219 "results": [ 00:25:49.219 { 00:25:49.219 "job": "TLSTESTn1", 00:25:49.219 "core_mask": "0x4", 00:25:49.219 "workload": "verify", 00:25:49.219 "status": "finished", 00:25:49.219 "verify_range": { 00:25:49.219 "start": 0, 00:25:49.219 "length": 8192 00:25:49.219 }, 00:25:49.219 "queue_depth": 128, 00:25:49.219 "io_size": 4096, 00:25:49.219 "runtime": 10.01445, 00:25:49.219 "iops": 5706.454173718976, 00:25:49.219 "mibps": 22.29083661608975, 00:25:49.219 "io_failed": 0, 00:25:49.219 "io_timeout": 0, 00:25:49.219 "avg_latency_us": 22398.15894307663, 00:25:49.219 "min_latency_us": 5352.106666666667, 00:25:49.219 "max_latency_us": 50681.17333333333 00:25:49.219 } 00:25:49.219 ], 00:25:49.219 "core_count": 1 00:25:49.219 } 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2716747 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2716747 ']' 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2716747 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2716747 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2716747' 00:25:49.219 killing process with pid 2716747 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2716747 00:25:49.219 Received shutdown signal, test time was about 10.000000 seconds 00:25:49.219 00:25:49.219 Latency(us) 00:25:49.219 [2024-11-20T16:52:49.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.219 [2024-11-20T16:52:49.135Z] 
=================================================================================================================== 00:25:49.219 [2024-11-20T16:52:49.135Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2716747 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uqIil78t7i 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uqIil78t7i 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uqIil78t7i 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uqIil78t7i 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2718933 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2718933 /var/tmp/bdevperf.sock 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2718933 ']' 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:49.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
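
The target is still provisioned with key0 taken from /tmp/tmp.hUYf667lLY, but this second bdevperf instance is handed the other key, /tmp/tmp.uqIil78t7i, so the attach must fail for the test to pass. That inversion is what the NOT wrapper from common/autotest_common.sh provides; stripped of its signal handling (visible later in the trace as the "(( es > 128 ))" checks), the idiom reduces to roughly:

# Condensed sketch of the NOT helper as its logic appears in this trace; the
# real implementation in common/autotest_common.sh also screens out commands
# killed by signals before deciding.
NOT() {
    local es=0
    "$@" || es=$?
    (( !es == 0 ))    # arithmetic truth only when the wrapped command failed
}

Usage matches the trace: NOT run_bdevperf ... /tmp/tmp.uqIil78t7i succeeds precisely because the controller attach below errors out.
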
00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:49.219 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:49.219 [2024-11-20 17:52:48.968736] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:49.219 [2024-11-20 17:52:48.968792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718933 ] 00:25:49.219 [2024-11-20 17:52:49.042988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.219 [2024-11-20 17:52:49.069299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.484 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.484 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:49.484 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uqIil78t7i 00:25:49.484 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:49.746 [2024-11-20 17:52:49.485067] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:49.746 [2024-11-20 17:52:49.494674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:49.746 [2024-11-20 17:52:49.495276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69bfc0 (107): Transport endpoint is not connected 00:25:49.746 [2024-11-20 17:52:49.496271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69bfc0 (9): Bad file descriptor 00:25:49.746 [2024-11-20 17:52:49.497272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.746 [2024-11-20 17:52:49.497281] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:49.746 [2024-11-20 17:52:49.497287] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:49.746 [2024-11-20 17:52:49.497295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
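
This is the intended failure mode: the subsystem only knows the key0 contents from /tmp/tmp.hUYf667lLY for host1, so the TLS handshake against the mismatched key collapses, the initiator's socket reads fail with errno 107 (Transport endpoint is not connected), and the controller lands in the failed state logged above. The JSON-RPC exchange dumped next is rpc.py echoing the attach request and the resulting -5 Input/output error. The same failure can be provoked by hand against this setup, roughly as follows (socket path, key file, and NQNs as in this run):

# Hypothetical manual reproduction of the wrong-key attach failure.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uqIil78t7i
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 \
    && echo "unexpected success" || echo "failed as expected"
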
00:25:49.746 request: 00:25:49.746 { 00:25:49.746 "name": "TLSTEST", 00:25:49.746 "trtype": "tcp", 00:25:49.746 "traddr": "10.0.0.2", 00:25:49.746 "adrfam": "ipv4", 00:25:49.746 "trsvcid": "4420", 00:25:49.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:49.746 "prchk_reftag": false, 00:25:49.746 "prchk_guard": false, 00:25:49.746 "hdgst": false, 00:25:49.746 "ddgst": false, 00:25:49.746 "psk": "key0", 00:25:49.746 "allow_unrecognized_csi": false, 00:25:49.746 "method": "bdev_nvme_attach_controller", 00:25:49.746 "req_id": 1 00:25:49.746 } 00:25:49.746 Got JSON-RPC error response 00:25:49.746 response: 00:25:49.746 { 00:25:49.746 "code": -5, 00:25:49.746 "message": "Input/output error" 00:25:49.746 } 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2718933 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2718933 ']' 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2718933 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2718933 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2718933' 00:25:49.746 killing process with pid 2718933 00:25:49.746 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2718933 00:25:49.746 Received shutdown signal, test time was about 10.000000 seconds 00:25:49.746 00:25:49.746 Latency(us) 00:25:49.746 [2024-11-20T16:52:49.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.746 [2024-11-20T16:52:49.663Z] =================================================================================================================== 00:25:49.747 [2024-11-20T16:52:49.663Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:49.747 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2718933 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hUYf667lLY 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.hUYf667lLY 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hUYf667lLY 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUYf667lLY 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2719145 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2719145 /var/tmp/bdevperf.sock 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2719145 ']' 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.009 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:50.009 [2024-11-20 17:52:49.750191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:50.009 [2024-11-20 17:52:49.750248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719145 ] 00:25:50.009 [2024-11-20 17:52:49.825180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.009 [2024-11-20 17:52:49.852537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.269 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:50.270 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:50.270 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUYf667lLY 00:25:50.270 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:25:50.530 [2024-11-20 17:52:50.268296] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:50.530 [2024-11-20 17:52:50.276502] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:50.530 [2024-11-20 17:52:50.276521] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:50.530 [2024-11-20 17:52:50.276540] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:50.530 [2024-11-20 17:52:50.276563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceffc0 (107): Transport endpoint is not connected 00:25:50.530 [2024-11-20 17:52:50.277551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceffc0 (9): Bad file descriptor 00:25:50.530 [2024-11-20 17:52:50.278553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.530 [2024-11-20 17:52:50.278561] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:50.530 [2024-11-20 17:52:50.278567] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:50.530 [2024-11-20 17:52:50.278575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
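
This failure differs from the previous one: here the key material is valid but the host NQN is not. The target constructs the TLS PSK identity as "NVMe0R01 <hostnqn> <subnqn>" (visible verbatim in the tcp_sock_get_key error above) and looks it up among the hosts registered on the subsystem; nqn.2016-06.io.spdk:host2 was never added, so no PSK exists for that identity and the handshake is refused before any key comparison happens. The one RPC that would have authorized this pairing, mirroring what target/tls.sh did for host1 earlier, would be:

# What this test deliberately does NOT do: register host2 with a PSK on the
# target's RPC socket (call shown for illustration only).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0
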
00:25:50.530 request: 00:25:50.530 { 00:25:50.530 "name": "TLSTEST", 00:25:50.530 "trtype": "tcp", 00:25:50.530 "traddr": "10.0.0.2", 00:25:50.530 "adrfam": "ipv4", 00:25:50.530 "trsvcid": "4420", 00:25:50.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.531 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:50.531 "prchk_reftag": false, 00:25:50.531 "prchk_guard": false, 00:25:50.531 "hdgst": false, 00:25:50.531 "ddgst": false, 00:25:50.531 "psk": "key0", 00:25:50.531 "allow_unrecognized_csi": false, 00:25:50.531 "method": "bdev_nvme_attach_controller", 00:25:50.531 "req_id": 1 00:25:50.531 } 00:25:50.531 Got JSON-RPC error response 00:25:50.531 response: 00:25:50.531 { 00:25:50.531 "code": -5, 00:25:50.531 "message": "Input/output error" 00:25:50.531 } 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2719145 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2719145 ']' 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2719145 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2719145 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2719145' 00:25:50.531 killing process with pid 2719145 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2719145 00:25:50.531 Received shutdown signal, test time was about 10.000000 seconds 00:25:50.531 00:25:50.531 Latency(us) 00:25:50.531 [2024-11-20T16:52:50.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.531 [2024-11-20T16:52:50.447Z] =================================================================================================================== 00:25:50.531 [2024-11-20T16:52:50.447Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:50.531 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2719145 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUYf667lLY 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.hUYf667lLY 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUYf667lLY 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUYf667lLY 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2719282 00:25:50.792 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:50.793 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2719282 /var/tmp/bdevperf.sock 00:25:50.793 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:50.793 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2719282 ']' 00:25:50.793 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.793 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.793 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:50.793 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.793 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:50.793 [2024-11-20 17:52:50.526707] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:50.793 [2024-11-20 17:52:50.526763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719282 ] 00:25:50.793 [2024-11-20 17:52:50.603906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.793 [2024-11-20 17:52:50.629955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:51.734 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.734 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:51.734 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUYf667lLY 00:25:51.734 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:51.994 [2024-11-20 17:52:51.655354] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:51.994 [2024-11-20 17:52:51.659902] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:51.994 [2024-11-20 17:52:51.659920] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:51.994 [2024-11-20 17:52:51.659938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:51.994 [2024-11-20 17:52:51.660597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5efc0 (107): Transport endpoint is not connected 00:25:51.994 [2024-11-20 17:52:51.661592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5efc0 (9): Bad file descriptor 00:25:51.994 [2024-11-20 17:52:51.662594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:51.994 [2024-11-20 17:52:51.662603] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:51.994 [2024-11-20 17:52:51.662609] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:25:51.994 [2024-11-20 17:52:51.662617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:51.994 request: 00:25:51.994 { 00:25:51.994 "name": "TLSTEST", 00:25:51.994 "trtype": "tcp", 00:25:51.994 "traddr": "10.0.0.2", 00:25:51.994 "adrfam": "ipv4", 00:25:51.994 "trsvcid": "4420", 00:25:51.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:51.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:51.994 "prchk_reftag": false, 00:25:51.994 "prchk_guard": false, 00:25:51.994 "hdgst": false, 00:25:51.994 "ddgst": false, 00:25:51.994 "psk": "key0", 00:25:51.994 "allow_unrecognized_csi": false, 00:25:51.994 "method": "bdev_nvme_attach_controller", 00:25:51.994 "req_id": 1 00:25:51.994 } 00:25:51.994 Got JSON-RPC error response 00:25:51.994 response: 00:25:51.994 { 00:25:51.994 "code": -5, 00:25:51.994 "message": "Input/output error" 00:25:51.994 } 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2719282 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2719282 ']' 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2719282 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2719282 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2719282' 00:25:51.994 killing process with pid 2719282 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2719282 00:25:51.994 Received shutdown signal, test time was about 10.000000 seconds 00:25:51.994 00:25:51.994 Latency(us) 00:25:51.994 [2024-11-20T16:52:51.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.994 [2024-11-20T16:52:51.910Z] =================================================================================================================== 00:25:51.994 [2024-11-20T16:52:51.910Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2719282 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:51.994 
17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:51.994 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2719617 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2719617 /var/tmp/bdevperf.sock 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2719617 ']' 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.995 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.995 [2024-11-20 17:52:51.900204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:51.995 [2024-11-20 17:52:51.900260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719617 ] 00:25:52.254 [2024-11-20 17:52:51.973089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.254 [2024-11-20 17:52:52.000798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.254 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.254 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:52.254 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:25:52.514 [2024-11-20 17:52:52.219782] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:25:52.514 [2024-11-20 17:52:52.219804] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:52.514 request: 00:25:52.514 { 00:25:52.514 "name": "key0", 00:25:52.514 "path": "", 00:25:52.514 "method": "keyring_file_add_key", 00:25:52.514 "req_id": 1 00:25:52.514 } 00:25:52.514 Got JSON-RPC error response 00:25:52.514 response: 00:25:52.514 { 00:25:52.514 "code": -1, 00:25:52.514 "message": "Operation not permitted" 00:25:52.514 } 00:25:52.514 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:52.514 [2024-11-20 17:52:52.388279] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:52.514 [2024-11-20 17:52:52.388300] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:52.514 request: 00:25:52.514 { 00:25:52.514 "name": "TLSTEST", 00:25:52.514 "trtype": "tcp", 00:25:52.514 "traddr": "10.0.0.2", 00:25:52.514 "adrfam": "ipv4", 00:25:52.514 "trsvcid": "4420", 00:25:52.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:52.514 "prchk_reftag": false, 00:25:52.514 "prchk_guard": false, 00:25:52.514 "hdgst": false, 00:25:52.514 "ddgst": false, 00:25:52.514 "psk": "key0", 00:25:52.514 "allow_unrecognized_csi": false, 00:25:52.514 "method": "bdev_nvme_attach_controller", 00:25:52.514 "req_id": 1 00:25:52.514 } 00:25:52.514 Got JSON-RPC error response 00:25:52.514 response: 00:25:52.514 { 00:25:52.514 "code": -126, 00:25:52.514 "message": "Required key not available" 00:25:52.514 } 00:25:52.514 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2719617 00:25:52.514 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2719617 ']' 00:25:52.514 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2719617 00:25:52.514 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:52.514 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:52.514 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2719617 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2719617' 00:25:52.774 killing process with pid 2719617 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2719617 00:25:52.774 Received shutdown signal, test time was about 10.000000 seconds 00:25:52.774 00:25:52.774 Latency(us) 00:25:52.774 [2024-11-20T16:52:52.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.774 [2024-11-20T16:52:52.690Z] =================================================================================================================== 00:25:52.774 [2024-11-20T16:52:52.690Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2719617 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2713921 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2713921 ']' 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2713921 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2713921 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2713921' 00:25:52.774 killing process with pid 2713921 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2713921 00:25:52.774 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2713921 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:53.035 17:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.nIKuk6pLG3 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.nIKuk6pLG3 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2719688 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2719688 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2719688 ']' 00:25:53.035 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.036 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:53.036 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.036 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:53.036 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.036 [2024-11-20 17:52:52.887779] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:53.036 [2024-11-20 17:52:52.887848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.297 [2024-11-20 17:52:52.973596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.297 [2024-11-20 17:52:53.005908] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.297 [2024-11-20 17:52:53.005946] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:53.297 [2024-11-20 17:52:53.005952] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.297 [2024-11-20 17:52:53.005957] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.297 [2024-11-20 17:52:53.005962] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.297 [2024-11-20 17:52:53.005983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.870 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:53.870 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:53.870 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:53.871 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:53.871 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.871 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.871 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.nIKuk6pLG3 00:25:53.871 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nIKuk6pLG3 00:25:53.871 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:54.132 [2024-11-20 17:52:53.870251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.132 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:54.392 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:54.392 [2024-11-20 17:52:54.227119] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:54.392 [2024-11-20 17:52:54.227327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.392 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:54.653 malloc0 00:25:54.653 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:54.914 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:25:54.914 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIKuk6pLG3 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nIKuk6pLG3 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2720241 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2720241 /var/tmp/bdevperf.sock 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2720241 ']' 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:55.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:55.176 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:55.176 [2024-11-20 17:52:55.018089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:25:55.176 [2024-11-20 17:52:55.018144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720241 ] 00:25:55.436 [2024-11-20 17:52:55.092258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.437 [2024-11-20 17:52:55.120324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.437 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.437 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:55.437 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:25:55.698 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:55.698 [2024-11-20 17:52:55.528052] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:55.698 TLSTESTn1 00:25:55.959 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:55.959 Running I/O for 10 seconds... 00:25:57.841 6287.00 IOPS, 24.56 MiB/s [2024-11-20T16:52:59.142Z] 6076.00 IOPS, 23.73 MiB/s [2024-11-20T16:53:00.083Z] 6016.00 IOPS, 23.50 MiB/s [2024-11-20T16:53:01.026Z] 6084.00 IOPS, 23.77 MiB/s [2024-11-20T16:53:01.968Z] 6161.40 IOPS, 24.07 MiB/s [2024-11-20T16:53:02.909Z] 6198.83 IOPS, 24.21 MiB/s [2024-11-20T16:53:03.853Z] 6227.71 IOPS, 24.33 MiB/s [2024-11-20T16:53:04.795Z] 6182.88 IOPS, 24.15 MiB/s [2024-11-20T16:53:05.737Z] 6146.33 IOPS, 24.01 MiB/s [2024-11-20T16:53:05.998Z] 6153.80 IOPS, 24.04 MiB/s 00:26:06.082 Latency(us) 00:26:06.082 [2024-11-20T16:53:05.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.082 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:06.082 Verification LBA range: start 0x0 length 0x2000 00:26:06.082 TLSTESTn1 : 10.01 6158.54 24.06 0.00 0.00 20754.42 4805.97 25231.36 00:26:06.082 [2024-11-20T16:53:05.998Z] =================================================================================================================== 00:26:06.082 [2024-11-20T16:53:05.998Z] Total : 6158.54 24.06 0.00 0.00 20754.42 4805.97 25231.36 00:26:06.082 { 00:26:06.082 "results": [ 00:26:06.082 { 00:26:06.082 "job": "TLSTESTn1", 00:26:06.082 "core_mask": "0x4", 00:26:06.082 "workload": "verify", 00:26:06.082 "status": "finished", 00:26:06.082 "verify_range": { 00:26:06.082 "start": 0, 00:26:06.082 "length": 8192 00:26:06.082 }, 00:26:06.082 "queue_depth": 128, 00:26:06.082 "io_size": 4096, 00:26:06.082 "runtime": 10.012758, 00:26:06.082 "iops": 6158.542930928721, 00:26:06.082 "mibps": 24.056808323940317, 00:26:06.082 "io_failed": 0, 00:26:06.082 "io_timeout": 0, 00:26:06.082 "avg_latency_us": 20754.423940494722, 00:26:06.082 "min_latency_us": 4805.973333333333, 00:26:06.082 "max_latency_us": 25231.36 00:26:06.082 } 00:26:06.082 ], 00:26:06.082 "core_count": 1 
00:26:06.082 } 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2720241 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2720241 ']' 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2720241 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2720241 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2720241' 00:26:06.082 killing process with pid 2720241 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2720241 00:26:06.082 Received shutdown signal, test time was about 10.000000 seconds 00:26:06.082 00:26:06.082 Latency(us) 00:26:06.082 [2024-11-20T16:53:05.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.082 [2024-11-20T16:53:05.998Z] =================================================================================================================== 00:26:06.082 [2024-11-20T16:53:05.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2720241 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.nIKuk6pLG3 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIKuk6pLG3 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIKuk6pLG3 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nIKuk6pLG3 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:06.082 17:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.nIKuk6pLG3 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2722305 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2722305 /var/tmp/bdevperf.sock 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2722305 ']' 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:06.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.082 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:06.343 [2024-11-20 17:53:05.996974] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:26:06.343 [2024-11-20 17:53:05.997038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722305 ] 00:26:06.343 [2024-11-20 17:53:06.073553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.343 [2024-11-20 17:53:06.099874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.343 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.343 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:06.343 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:26:06.604 [2024-11-20 17:53:06.327055] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nIKuk6pLG3': 0100666 00:26:06.604 [2024-11-20 17:53:06.327082] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:06.604 request: 00:26:06.604 { 00:26:06.604 "name": "key0", 00:26:06.604 "path": "/tmp/tmp.nIKuk6pLG3", 00:26:06.604 "method": "keyring_file_add_key", 00:26:06.604 "req_id": 1 00:26:06.604 } 00:26:06.604 Got JSON-RPC error response 00:26:06.604 response: 00:26:06.604 { 00:26:06.604 "code": -1, 00:26:06.604 "message": "Operation not permitted" 00:26:06.604 } 00:26:06.604 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:06.604 [2024-11-20 17:53:06.507578] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:06.604 [2024-11-20 17:53:06.507603] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:26:06.604 request: 00:26:06.604 { 00:26:06.604 "name": "TLSTEST", 00:26:06.604 "trtype": "tcp", 00:26:06.604 "traddr": "10.0.0.2", 00:26:06.604 "adrfam": "ipv4", 00:26:06.604 "trsvcid": "4420", 00:26:06.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:06.604 "prchk_reftag": false, 00:26:06.604 "prchk_guard": false, 00:26:06.604 "hdgst": false, 00:26:06.604 "ddgst": false, 00:26:06.604 "psk": "key0", 00:26:06.604 "allow_unrecognized_csi": false, 00:26:06.604 "method": "bdev_nvme_attach_controller", 00:26:06.604 "req_id": 1 00:26:06.604 } 00:26:06.604 Got JSON-RPC error response 00:26:06.604 response: 00:26:06.604 { 00:26:06.604 "code": -126, 00:26:06.604 "message": "Required key not available" 00:26:06.604 } 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2722305 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2722305 ']' 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2722305 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2722305 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2722305' 00:26:06.865 killing process with pid 2722305 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2722305 00:26:06.865 Received shutdown signal, test time was about 10.000000 seconds 00:26:06.865 00:26:06.865 Latency(us) 00:26:06.865 [2024-11-20T16:53:06.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.865 [2024-11-20T16:53:06.781Z] =================================================================================================================== 00:26:06.865 [2024-11-20T16:53:06.781Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2722305 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2719688 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2719688 ']' 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2719688 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2719688 00:26:06.865 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:06.866 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:06.866 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2719688' 00:26:06.866 killing process with pid 2719688 00:26:06.866 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2719688 00:26:06.866 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2719688 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # 
nvmfpid=2722462 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2722462 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2722462 ']' 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:07.127 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:07.127 [2024-11-20 17:53:06.952341] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:07.127 [2024-11-20 17:53:06.952393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.127 [2024-11-20 17:53:07.027994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.388 [2024-11-20 17:53:07.059602] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.388 [2024-11-20 17:53:07.059646] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.388 [2024-11-20 17:53:07.059652] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.388 [2024-11-20 17:53:07.059657] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.388 [2024-11-20 17:53:07.059661] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
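Aside — the NVMeTLSkey-1 value threaded through this run was produced earlier by format_interchange_psk, i.e. the format_key helper in nvmf/common.sh whose "python -" step is visible in the trace above. A minimal re-implementation sketch, assuming the TP 8006 interchange layout that the traced helper appears to compute: base64 of the configured PSK with its little-endian CRC32 appended, behind a "<prefix>:<hash id>:" header (hash id 2 selecting the SHA-384 variant). The function name and argument handling below are illustrative, not copied from SPDK:

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte CRC32, little-endian
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF
}

# Should reproduce the key_long captured above:
#   format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
#   -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: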
00:26:07.388 [2024-11-20 17:53:07.059680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.nIKuk6pLG3 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nIKuk6pLG3 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.nIKuk6pLG3 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nIKuk6pLG3 00:26:08.033 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:08.297 [2024-11-20 17:53:07.948255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.297 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:08.297 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:08.558 [2024-11-20 17:53:08.309128] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:08.558 [2024-11-20 17:53:08.309331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.558 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:08.819 malloc0 00:26:08.819 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:08.819 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:26:09.079 [2024-11-20 
17:53:08.860959] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nIKuk6pLG3': 0100666 00:26:09.079 [2024-11-20 17:53:08.860985] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:09.079 request: 00:26:09.079 { 00:26:09.079 "name": "key0", 00:26:09.079 "path": "/tmp/tmp.nIKuk6pLG3", 00:26:09.079 "method": "keyring_file_add_key", 00:26:09.079 "req_id": 1 00:26:09.079 } 00:26:09.079 Got JSON-RPC error response 00:26:09.079 response: 00:26:09.079 { 00:26:09.079 "code": -1, 00:26:09.079 "message": "Operation not permitted" 00:26:09.079 } 00:26:09.079 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:09.340 [2024-11-20 17:53:09.037425] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:26:09.340 [2024-11-20 17:53:09.037458] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:26:09.340 request: 00:26:09.340 { 00:26:09.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.340 "host": "nqn.2016-06.io.spdk:host1", 00:26:09.340 "psk": "key0", 00:26:09.340 "method": "nvmf_subsystem_add_host", 00:26:09.340 "req_id": 1 00:26:09.340 } 00:26:09.340 Got JSON-RPC error response 00:26:09.340 response: 00:26:09.340 { 00:26:09.340 "code": -32603, 00:26:09.340 "message": "Internal error" 00:26:09.340 } 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2722462 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2722462 ']' 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2722462 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2722462 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2722462' 00:26:09.340 killing process with pid 2722462 00:26:09.340 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2722462 00:26:09.341 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2722462 00:26:09.341 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.nIKuk6pLG3 00:26:09.341 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:26:09.341 17:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:09.341 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:09.341 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2723015 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2723015 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2723015 ']' 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:09.601 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 [2024-11-20 17:53:09.305498] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:09.601 [2024-11-20 17:53:09.305550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.601 [2024-11-20 17:53:09.386521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.601 [2024-11-20 17:53:09.413688] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.601 [2024-11-20 17:53:09.413726] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.601 [2024-11-20 17:53:09.413731] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.601 [2024-11-20 17:53:09.413736] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.601 [2024-11-20 17:53:09.413741] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
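The permission failures above come from keyring_file_check_path: spdk_keyring_add_key refuses a PSK file that is not an absolute path or that is readable by group/other, which is why the chmod 0666 leg fails keyring_file_add_key with -1 "Operation not permitted" (and nvmf_subsystem_add_host then fails with -32603, since key0 was never created), while chmod 0600 restores the happy path. A hypothetical pre-flight check one could run before keyring_file_add_key — the stat invocation and the permission mask mirror the logged errors rather than any SPDK-provided helper:

psk=/tmp/tmp.nIKuk6pLG3                        # path from the mktemp step earlier
case $psk in /*) ;; *) echo "keyring requires an absolute path" >&2; exit 1 ;; esac
perms=$(stat -c %a "$psk")                     # octal mode, e.g. 600 or 666
if (( 0$perms & 077 )); then                   # any group/other bits -> rejected
    echo "refusing: '$psk' is mode 0$perms, expected owner-only (0600)" >&2
    exit 1
fi
scripts/rpc.py keyring_file_add_key key0 "$psk"    # run from the SPDK checkout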
00:26:09.601 [2024-11-20 17:53:09.413758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.nIKuk6pLG3 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nIKuk6pLG3 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:10.541 [2024-11-20 17:53:10.308202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.541 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:10.801 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:10.801 [2024-11-20 17:53:10.665074] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:10.801 [2024-11-20 17:53:10.665280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.801 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:11.062 malloc0 00:26:11.062 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:11.322 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:26:11.322 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2723376 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2723376 /var/tmp/bdevperf.sock 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2723376 ']' 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:11.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:11.583 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:11.583 [2024-11-20 17:53:11.449323] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:11.583 [2024-11-20 17:53:11.449375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723376 ] 00:26:11.843 [2024-11-20 17:53:11.524416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.843 [2024-11-20 17:53:11.554336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.843 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:11.843 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:11.843 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:26:12.163 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:12.163 [2024-11-20 17:53:11.961695] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:12.163 TLSTESTn1 00:26:12.163 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:26:12.424 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:26:12.424 "subsystems": [ 00:26:12.424 { 00:26:12.424 "subsystem": "keyring", 00:26:12.424 "config": [ 00:26:12.424 { 00:26:12.424 "method": "keyring_file_add_key", 00:26:12.424 "params": { 00:26:12.424 "name": "key0", 00:26:12.424 "path": "/tmp/tmp.nIKuk6pLG3" 00:26:12.424 } 00:26:12.424 } 00:26:12.424 ] 00:26:12.424 }, 00:26:12.424 { 00:26:12.424 "subsystem": "iobuf", 00:26:12.424 "config": [ 00:26:12.424 { 00:26:12.424 "method": "iobuf_set_options", 00:26:12.424 "params": { 00:26:12.424 "small_pool_count": 8192, 00:26:12.424 "large_pool_count": 1024, 00:26:12.424 "small_bufsize": 8192, 00:26:12.424 "large_bufsize": 135168 00:26:12.425 } 00:26:12.425 } 00:26:12.425 ] 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "subsystem": "sock", 00:26:12.425 "config": [ 00:26:12.425 { 00:26:12.425 "method": "sock_set_default_impl", 00:26:12.425 "params": { 00:26:12.425 "impl_name": "posix" 00:26:12.425 } 00:26:12.425 }, 
00:26:12.425 { 00:26:12.425 "method": "sock_impl_set_options", 00:26:12.425 "params": { 00:26:12.425 "impl_name": "ssl", 00:26:12.425 "recv_buf_size": 4096, 00:26:12.425 "send_buf_size": 4096, 00:26:12.425 "enable_recv_pipe": true, 00:26:12.425 "enable_quickack": false, 00:26:12.425 "enable_placement_id": 0, 00:26:12.425 "enable_zerocopy_send_server": true, 00:26:12.425 "enable_zerocopy_send_client": false, 00:26:12.425 "zerocopy_threshold": 0, 00:26:12.425 "tls_version": 0, 00:26:12.425 "enable_ktls": false 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "sock_impl_set_options", 00:26:12.425 "params": { 00:26:12.425 "impl_name": "posix", 00:26:12.425 "recv_buf_size": 2097152, 00:26:12.425 "send_buf_size": 2097152, 00:26:12.425 "enable_recv_pipe": true, 00:26:12.425 "enable_quickack": false, 00:26:12.425 "enable_placement_id": 0, 00:26:12.425 "enable_zerocopy_send_server": true, 00:26:12.425 "enable_zerocopy_send_client": false, 00:26:12.425 "zerocopy_threshold": 0, 00:26:12.425 "tls_version": 0, 00:26:12.425 "enable_ktls": false 00:26:12.425 } 00:26:12.425 } 00:26:12.425 ] 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "subsystem": "vmd", 00:26:12.425 "config": [] 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "subsystem": "accel", 00:26:12.425 "config": [ 00:26:12.425 { 00:26:12.425 "method": "accel_set_options", 00:26:12.425 "params": { 00:26:12.425 "small_cache_size": 128, 00:26:12.425 "large_cache_size": 16, 00:26:12.425 "task_count": 2048, 00:26:12.425 "sequence_count": 2048, 00:26:12.425 "buf_count": 2048 00:26:12.425 } 00:26:12.425 } 00:26:12.425 ] 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "subsystem": "bdev", 00:26:12.425 "config": [ 00:26:12.425 { 00:26:12.425 "method": "bdev_set_options", 00:26:12.425 "params": { 00:26:12.425 "bdev_io_pool_size": 65535, 00:26:12.425 "bdev_io_cache_size": 256, 00:26:12.425 "bdev_auto_examine": true, 00:26:12.425 "iobuf_small_cache_size": 128, 00:26:12.425 "iobuf_large_cache_size": 16 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "bdev_raid_set_options", 00:26:12.425 "params": { 00:26:12.425 "process_window_size_kb": 1024, 00:26:12.425 "process_max_bandwidth_mb_sec": 0 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "bdev_iscsi_set_options", 00:26:12.425 "params": { 00:26:12.425 "timeout_sec": 30 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "bdev_nvme_set_options", 00:26:12.425 "params": { 00:26:12.425 "action_on_timeout": "none", 00:26:12.425 "timeout_us": 0, 00:26:12.425 "timeout_admin_us": 0, 00:26:12.425 "keep_alive_timeout_ms": 10000, 00:26:12.425 "arbitration_burst": 0, 00:26:12.425 "low_priority_weight": 0, 00:26:12.425 "medium_priority_weight": 0, 00:26:12.425 "high_priority_weight": 0, 00:26:12.425 "nvme_adminq_poll_period_us": 10000, 00:26:12.425 "nvme_ioq_poll_period_us": 0, 00:26:12.425 "io_queue_requests": 0, 00:26:12.425 "delay_cmd_submit": true, 00:26:12.425 "transport_retry_count": 4, 00:26:12.425 "bdev_retry_count": 3, 00:26:12.425 "transport_ack_timeout": 0, 00:26:12.425 "ctrlr_loss_timeout_sec": 0, 00:26:12.425 "reconnect_delay_sec": 0, 00:26:12.425 "fast_io_fail_timeout_sec": 0, 00:26:12.425 "disable_auto_failback": false, 00:26:12.425 "generate_uuids": false, 00:26:12.425 "transport_tos": 0, 00:26:12.425 "nvme_error_stat": false, 00:26:12.425 "rdma_srq_size": 0, 00:26:12.425 "io_path_stat": false, 00:26:12.425 "allow_accel_sequence": false, 00:26:12.425 "rdma_max_cq_size": 0, 00:26:12.425 "rdma_cm_event_timeout_ms": 0, 00:26:12.425 
"dhchap_digests": [ 00:26:12.425 "sha256", 00:26:12.425 "sha384", 00:26:12.425 "sha512" 00:26:12.425 ], 00:26:12.425 "dhchap_dhgroups": [ 00:26:12.425 "null", 00:26:12.425 "ffdhe2048", 00:26:12.425 "ffdhe3072", 00:26:12.425 "ffdhe4096", 00:26:12.425 "ffdhe6144", 00:26:12.425 "ffdhe8192" 00:26:12.425 ] 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "bdev_nvme_set_hotplug", 00:26:12.425 "params": { 00:26:12.425 "period_us": 100000, 00:26:12.425 "enable": false 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "bdev_malloc_create", 00:26:12.425 "params": { 00:26:12.425 "name": "malloc0", 00:26:12.425 "num_blocks": 8192, 00:26:12.425 "block_size": 4096, 00:26:12.425 "physical_block_size": 4096, 00:26:12.425 "uuid": "d21e3299-344e-4e66-89c9-68188870ca37", 00:26:12.425 "optimal_io_boundary": 0, 00:26:12.425 "md_size": 0, 00:26:12.425 "dif_type": 0, 00:26:12.425 "dif_is_head_of_md": false, 00:26:12.425 "dif_pi_format": 0 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "bdev_wait_for_examine" 00:26:12.425 } 00:26:12.425 ] 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "subsystem": "nbd", 00:26:12.425 "config": [] 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "subsystem": "scheduler", 00:26:12.425 "config": [ 00:26:12.425 { 00:26:12.425 "method": "framework_set_scheduler", 00:26:12.425 "params": { 00:26:12.425 "name": "static" 00:26:12.425 } 00:26:12.425 } 00:26:12.425 ] 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "subsystem": "nvmf", 00:26:12.425 "config": [ 00:26:12.425 { 00:26:12.425 "method": "nvmf_set_config", 00:26:12.425 "params": { 00:26:12.425 "discovery_filter": "match_any", 00:26:12.425 "admin_cmd_passthru": { 00:26:12.425 "identify_ctrlr": false 00:26:12.425 }, 00:26:12.425 "dhchap_digests": [ 00:26:12.425 "sha256", 00:26:12.425 "sha384", 00:26:12.425 "sha512" 00:26:12.425 ], 00:26:12.425 "dhchap_dhgroups": [ 00:26:12.425 "null", 00:26:12.425 "ffdhe2048", 00:26:12.425 "ffdhe3072", 00:26:12.425 "ffdhe4096", 00:26:12.425 "ffdhe6144", 00:26:12.425 "ffdhe8192" 00:26:12.425 ] 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "nvmf_set_max_subsystems", 00:26:12.425 "params": { 00:26:12.425 "max_subsystems": 1024 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "nvmf_set_crdt", 00:26:12.425 "params": { 00:26:12.425 "crdt1": 0, 00:26:12.425 "crdt2": 0, 00:26:12.425 "crdt3": 0 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "nvmf_create_transport", 00:26:12.425 "params": { 00:26:12.425 "trtype": "TCP", 00:26:12.425 "max_queue_depth": 128, 00:26:12.425 "max_io_qpairs_per_ctrlr": 127, 00:26:12.425 "in_capsule_data_size": 4096, 00:26:12.425 "max_io_size": 131072, 00:26:12.425 "io_unit_size": 131072, 00:26:12.425 "max_aq_depth": 128, 00:26:12.425 "num_shared_buffers": 511, 00:26:12.425 "buf_cache_size": 4294967295, 00:26:12.425 "dif_insert_or_strip": false, 00:26:12.425 "zcopy": false, 00:26:12.425 "c2h_success": false, 00:26:12.425 "sock_priority": 0, 00:26:12.425 "abort_timeout_sec": 1, 00:26:12.425 "ack_timeout": 0, 00:26:12.425 "data_wr_pool_size": 0 00:26:12.425 } 00:26:12.425 }, 00:26:12.425 { 00:26:12.425 "method": "nvmf_create_subsystem", 00:26:12.425 "params": { 00:26:12.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.426 "allow_any_host": false, 00:26:12.426 "serial_number": "SPDK00000000000001", 00:26:12.426 "model_number": "SPDK bdev Controller", 00:26:12.426 "max_namespaces": 10, 00:26:12.426 "min_cntlid": 1, 00:26:12.426 "max_cntlid": 65519, 00:26:12.426 
"ana_reporting": false 00:26:12.426 } 00:26:12.426 }, 00:26:12.426 { 00:26:12.426 "method": "nvmf_subsystem_add_host", 00:26:12.426 "params": { 00:26:12.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.426 "host": "nqn.2016-06.io.spdk:host1", 00:26:12.426 "psk": "key0" 00:26:12.426 } 00:26:12.426 }, 00:26:12.426 { 00:26:12.426 "method": "nvmf_subsystem_add_ns", 00:26:12.426 "params": { 00:26:12.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.426 "namespace": { 00:26:12.426 "nsid": 1, 00:26:12.426 "bdev_name": "malloc0", 00:26:12.426 "nguid": "D21E3299344E4E6689C968188870CA37", 00:26:12.426 "uuid": "d21e3299-344e-4e66-89c9-68188870ca37", 00:26:12.426 "no_auto_visible": false 00:26:12.426 } 00:26:12.426 } 00:26:12.426 }, 00:26:12.426 { 00:26:12.426 "method": "nvmf_subsystem_add_listener", 00:26:12.426 "params": { 00:26:12.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.426 "listen_address": { 00:26:12.426 "trtype": "TCP", 00:26:12.426 "adrfam": "IPv4", 00:26:12.426 "traddr": "10.0.0.2", 00:26:12.426 "trsvcid": "4420" 00:26:12.426 }, 00:26:12.426 "secure_channel": true 00:26:12.426 } 00:26:12.426 } 00:26:12.426 ] 00:26:12.426 } 00:26:12.426 ] 00:26:12.426 }' 00:26:12.426 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:12.687 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:26:12.687 "subsystems": [ 00:26:12.687 { 00:26:12.687 "subsystem": "keyring", 00:26:12.687 "config": [ 00:26:12.687 { 00:26:12.687 "method": "keyring_file_add_key", 00:26:12.687 "params": { 00:26:12.687 "name": "key0", 00:26:12.687 "path": "/tmp/tmp.nIKuk6pLG3" 00:26:12.687 } 00:26:12.687 } 00:26:12.687 ] 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "subsystem": "iobuf", 00:26:12.687 "config": [ 00:26:12.687 { 00:26:12.687 "method": "iobuf_set_options", 00:26:12.687 "params": { 00:26:12.687 "small_pool_count": 8192, 00:26:12.687 "large_pool_count": 1024, 00:26:12.687 "small_bufsize": 8192, 00:26:12.687 "large_bufsize": 135168 00:26:12.687 } 00:26:12.687 } 00:26:12.687 ] 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "subsystem": "sock", 00:26:12.687 "config": [ 00:26:12.687 { 00:26:12.687 "method": "sock_set_default_impl", 00:26:12.687 "params": { 00:26:12.687 "impl_name": "posix" 00:26:12.687 } 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "method": "sock_impl_set_options", 00:26:12.687 "params": { 00:26:12.687 "impl_name": "ssl", 00:26:12.687 "recv_buf_size": 4096, 00:26:12.687 "send_buf_size": 4096, 00:26:12.687 "enable_recv_pipe": true, 00:26:12.687 "enable_quickack": false, 00:26:12.687 "enable_placement_id": 0, 00:26:12.687 "enable_zerocopy_send_server": true, 00:26:12.687 "enable_zerocopy_send_client": false, 00:26:12.687 "zerocopy_threshold": 0, 00:26:12.687 "tls_version": 0, 00:26:12.687 "enable_ktls": false 00:26:12.687 } 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "method": "sock_impl_set_options", 00:26:12.687 "params": { 00:26:12.687 "impl_name": "posix", 00:26:12.687 "recv_buf_size": 2097152, 00:26:12.687 "send_buf_size": 2097152, 00:26:12.687 "enable_recv_pipe": true, 00:26:12.687 "enable_quickack": false, 00:26:12.687 "enable_placement_id": 0, 00:26:12.687 "enable_zerocopy_send_server": true, 00:26:12.687 "enable_zerocopy_send_client": false, 00:26:12.687 "zerocopy_threshold": 0, 00:26:12.687 "tls_version": 0, 00:26:12.687 "enable_ktls": false 00:26:12.687 } 00:26:12.687 } 00:26:12.687 ] 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 
"subsystem": "vmd", 00:26:12.687 "config": [] 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "subsystem": "accel", 00:26:12.687 "config": [ 00:26:12.687 { 00:26:12.687 "method": "accel_set_options", 00:26:12.687 "params": { 00:26:12.687 "small_cache_size": 128, 00:26:12.687 "large_cache_size": 16, 00:26:12.687 "task_count": 2048, 00:26:12.687 "sequence_count": 2048, 00:26:12.687 "buf_count": 2048 00:26:12.687 } 00:26:12.687 } 00:26:12.687 ] 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "subsystem": "bdev", 00:26:12.687 "config": [ 00:26:12.687 { 00:26:12.687 "method": "bdev_set_options", 00:26:12.687 "params": { 00:26:12.687 "bdev_io_pool_size": 65535, 00:26:12.687 "bdev_io_cache_size": 256, 00:26:12.687 "bdev_auto_examine": true, 00:26:12.687 "iobuf_small_cache_size": 128, 00:26:12.687 "iobuf_large_cache_size": 16 00:26:12.687 } 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "method": "bdev_raid_set_options", 00:26:12.687 "params": { 00:26:12.687 "process_window_size_kb": 1024, 00:26:12.687 "process_max_bandwidth_mb_sec": 0 00:26:12.687 } 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "method": "bdev_iscsi_set_options", 00:26:12.687 "params": { 00:26:12.687 "timeout_sec": 30 00:26:12.687 } 00:26:12.687 }, 00:26:12.687 { 00:26:12.687 "method": "bdev_nvme_set_options", 00:26:12.687 "params": { 00:26:12.687 "action_on_timeout": "none", 00:26:12.687 "timeout_us": 0, 00:26:12.687 "timeout_admin_us": 0, 00:26:12.687 "keep_alive_timeout_ms": 10000, 00:26:12.687 "arbitration_burst": 0, 00:26:12.687 "low_priority_weight": 0, 00:26:12.687 "medium_priority_weight": 0, 00:26:12.687 "high_priority_weight": 0, 00:26:12.687 "nvme_adminq_poll_period_us": 10000, 00:26:12.687 "nvme_ioq_poll_period_us": 0, 00:26:12.687 "io_queue_requests": 512, 00:26:12.687 "delay_cmd_submit": true, 00:26:12.687 "transport_retry_count": 4, 00:26:12.687 "bdev_retry_count": 3, 00:26:12.687 "transport_ack_timeout": 0, 00:26:12.687 "ctrlr_loss_timeout_sec": 0, 00:26:12.687 "reconnect_delay_sec": 0, 00:26:12.687 "fast_io_fail_timeout_sec": 0, 00:26:12.687 "disable_auto_failback": false, 00:26:12.687 "generate_uuids": false, 00:26:12.687 "transport_tos": 0, 00:26:12.688 "nvme_error_stat": false, 00:26:12.688 "rdma_srq_size": 0, 00:26:12.688 "io_path_stat": false, 00:26:12.688 "allow_accel_sequence": false, 00:26:12.688 "rdma_max_cq_size": 0, 00:26:12.688 "rdma_cm_event_timeout_ms": 0, 00:26:12.688 "dhchap_digests": [ 00:26:12.688 "sha256", 00:26:12.688 "sha384", 00:26:12.688 "sha512" 00:26:12.688 ], 00:26:12.688 "dhchap_dhgroups": [ 00:26:12.688 "null", 00:26:12.688 "ffdhe2048", 00:26:12.688 "ffdhe3072", 00:26:12.688 "ffdhe4096", 00:26:12.688 "ffdhe6144", 00:26:12.688 "ffdhe8192" 00:26:12.688 ] 00:26:12.688 } 00:26:12.688 }, 00:26:12.688 { 00:26:12.688 "method": "bdev_nvme_attach_controller", 00:26:12.688 "params": { 00:26:12.688 "name": "TLSTEST", 00:26:12.688 "trtype": "TCP", 00:26:12.688 "adrfam": "IPv4", 00:26:12.688 "traddr": "10.0.0.2", 00:26:12.688 "trsvcid": "4420", 00:26:12.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.688 "prchk_reftag": false, 00:26:12.688 "prchk_guard": false, 00:26:12.688 "ctrlr_loss_timeout_sec": 0, 00:26:12.688 "reconnect_delay_sec": 0, 00:26:12.688 "fast_io_fail_timeout_sec": 0, 00:26:12.688 "psk": "key0", 00:26:12.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:12.688 "hdgst": false, 00:26:12.688 "ddgst": false 00:26:12.688 } 00:26:12.688 }, 00:26:12.688 { 00:26:12.688 "method": "bdev_nvme_set_hotplug", 00:26:12.688 "params": { 00:26:12.688 "period_us": 100000, 00:26:12.688 "enable": false 
00:26:12.688 } 00:26:12.688 }, 00:26:12.688 { 00:26:12.688 "method": "bdev_wait_for_examine" 00:26:12.688 } 00:26:12.688 ] 00:26:12.688 }, 00:26:12.688 { 00:26:12.688 "subsystem": "nbd", 00:26:12.688 "config": [] 00:26:12.688 } 00:26:12.688 ] 00:26:12.688 }' 00:26:12.688 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2723376 00:26:12.688 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2723376 ']' 00:26:12.688 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2723376 00:26:12.688 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:12.688 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.688 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723376 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723376' 00:26:12.949 killing process with pid 2723376 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2723376 00:26:12.949 Received shutdown signal, test time was about 10.000000 seconds 00:26:12.949 00:26:12.949 Latency(us) 00:26:12.949 [2024-11-20T16:53:12.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.949 [2024-11-20T16:53:12.865Z] =================================================================================================================== 00:26:12.949 [2024-11-20T16:53:12.865Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2723376 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2723015 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2723015 ']' 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2723015 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723015 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723015' 00:26:12.949 killing process with pid 2723015 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2723015 00:26:12.949 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2723015 00:26:13.211 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:13.211 17:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:13.211 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.211 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:13.211 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:26:13.211 "subsystems": [ 00:26:13.211 { 00:26:13.211 "subsystem": "keyring", 00:26:13.211 "config": [ 00:26:13.211 { 00:26:13.211 "method": "keyring_file_add_key", 00:26:13.211 "params": { 00:26:13.211 "name": "key0", 00:26:13.211 "path": "/tmp/tmp.nIKuk6pLG3" 00:26:13.211 } 00:26:13.211 } 00:26:13.211 ] 00:26:13.211 }, 00:26:13.211 { 00:26:13.211 "subsystem": "iobuf", 00:26:13.211 "config": [ 00:26:13.211 { 00:26:13.211 "method": "iobuf_set_options", 00:26:13.211 "params": { 00:26:13.211 "small_pool_count": 8192, 00:26:13.211 "large_pool_count": 1024, 00:26:13.211 "small_bufsize": 8192, 00:26:13.211 "large_bufsize": 135168 00:26:13.211 } 00:26:13.211 } 00:26:13.211 ] 00:26:13.211 }, 00:26:13.211 { 00:26:13.211 "subsystem": "sock", 00:26:13.211 "config": [ 00:26:13.211 { 00:26:13.211 "method": "sock_set_default_impl", 00:26:13.211 "params": { 00:26:13.211 "impl_name": "posix" 00:26:13.211 } 00:26:13.211 }, 00:26:13.211 { 00:26:13.211 "method": "sock_impl_set_options", 00:26:13.211 "params": { 00:26:13.211 "impl_name": "ssl", 00:26:13.211 "recv_buf_size": 4096, 00:26:13.211 "send_buf_size": 4096, 00:26:13.211 "enable_recv_pipe": true, 00:26:13.211 "enable_quickack": false, 00:26:13.211 "enable_placement_id": 0, 00:26:13.211 "enable_zerocopy_send_server": true, 00:26:13.211 "enable_zerocopy_send_client": false, 00:26:13.211 "zerocopy_threshold": 0, 00:26:13.211 "tls_version": 0, 00:26:13.211 "enable_ktls": false 00:26:13.211 } 00:26:13.211 }, 00:26:13.211 { 00:26:13.211 "method": "sock_impl_set_options", 00:26:13.211 "params": { 00:26:13.211 "impl_name": "posix", 00:26:13.211 "recv_buf_size": 2097152, 00:26:13.211 "send_buf_size": 2097152, 00:26:13.211 "enable_recv_pipe": true, 00:26:13.211 "enable_quickack": false, 00:26:13.211 "enable_placement_id": 0, 00:26:13.211 "enable_zerocopy_send_server": true, 00:26:13.212 "enable_zerocopy_send_client": false, 00:26:13.212 "zerocopy_threshold": 0, 00:26:13.212 "tls_version": 0, 00:26:13.212 "enable_ktls": false 00:26:13.212 } 00:26:13.212 } 00:26:13.212 ] 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "subsystem": "vmd", 00:26:13.212 "config": [] 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "subsystem": "accel", 00:26:13.212 "config": [ 00:26:13.212 { 00:26:13.212 "method": "accel_set_options", 00:26:13.212 "params": { 00:26:13.212 "small_cache_size": 128, 00:26:13.212 "large_cache_size": 16, 00:26:13.212 "task_count": 2048, 00:26:13.212 "sequence_count": 2048, 00:26:13.212 "buf_count": 2048 00:26:13.212 } 00:26:13.212 } 00:26:13.212 ] 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "subsystem": "bdev", 00:26:13.212 "config": [ 00:26:13.212 { 00:26:13.212 "method": "bdev_set_options", 00:26:13.212 "params": { 00:26:13.212 "bdev_io_pool_size": 65535, 00:26:13.212 "bdev_io_cache_size": 256, 00:26:13.212 "bdev_auto_examine": true, 00:26:13.212 "iobuf_small_cache_size": 128, 00:26:13.212 "iobuf_large_cache_size": 16 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "bdev_raid_set_options", 00:26:13.212 "params": { 00:26:13.212 "process_window_size_kb": 1024, 00:26:13.212 "process_max_bandwidth_mb_sec": 0 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 
00:26:13.212 "method": "bdev_iscsi_set_options", 00:26:13.212 "params": { 00:26:13.212 "timeout_sec": 30 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "bdev_nvme_set_options", 00:26:13.212 "params": { 00:26:13.212 "action_on_timeout": "none", 00:26:13.212 "timeout_us": 0, 00:26:13.212 "timeout_admin_us": 0, 00:26:13.212 "keep_alive_timeout_ms": 10000, 00:26:13.212 "arbitration_burst": 0, 00:26:13.212 "low_priority_weight": 0, 00:26:13.212 "medium_priority_weight": 0, 00:26:13.212 "high_priority_weight": 0, 00:26:13.212 "nvme_adminq_poll_period_us": 10000, 00:26:13.212 "nvme_ioq_poll_period_us": 0, 00:26:13.212 "io_queue_requests": 0, 00:26:13.212 "delay_cmd_submit": true, 00:26:13.212 "transport_retry_count": 4, 00:26:13.212 "bdev_retry_count": 3, 00:26:13.212 "transport_ack_timeout": 0, 00:26:13.212 "ctrlr_loss_timeout_sec": 0, 00:26:13.212 "reconnect_delay_sec": 0, 00:26:13.212 "fast_io_fail_timeout_sec": 0, 00:26:13.212 "disable_auto_failback": false, 00:26:13.212 "generate_uuids": false, 00:26:13.212 "transport_tos": 0, 00:26:13.212 "nvme_error_stat": false, 00:26:13.212 "rdma_srq_size": 0, 00:26:13.212 "io_path_stat": false, 00:26:13.212 "allow_accel_sequence": false, 00:26:13.212 "rdma_max_cq_size": 0, 00:26:13.212 "rdma_cm_event_timeout_ms": 0, 00:26:13.212 "dhchap_digests": [ 00:26:13.212 "sha256", 00:26:13.212 "sha384", 00:26:13.212 "sha512" 00:26:13.212 ], 00:26:13.212 "dhchap_dhgroups": [ 00:26:13.212 "null", 00:26:13.212 "ffdhe2048", 00:26:13.212 "ffdhe3072", 00:26:13.212 "ffdhe4096", 00:26:13.212 "ffdhe6144", 00:26:13.212 "ffdhe8192" 00:26:13.212 ] 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "bdev_nvme_set_hotplug", 00:26:13.212 "params": { 00:26:13.212 "period_us": 100000, 00:26:13.212 "enable": false 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "bdev_malloc_create", 00:26:13.212 "params": { 00:26:13.212 "name": "malloc0", 00:26:13.212 "num_blocks": 8192, 00:26:13.212 "block_size": 4096, 00:26:13.212 "physical_block_size": 4096, 00:26:13.212 "uuid": "d21e3299-344e-4e66-89c9-68188870ca37", 00:26:13.212 "optimal_io_boundary": 0, 00:26:13.212 "md_size": 0, 00:26:13.212 "dif_type": 0, 00:26:13.212 "dif_is_head_of_md": false, 00:26:13.212 "dif_pi_format": 0 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "bdev_wait_for_examine" 00:26:13.212 } 00:26:13.212 ] 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "subsystem": "nbd", 00:26:13.212 "config": [] 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "subsystem": "scheduler", 00:26:13.212 "config": [ 00:26:13.212 { 00:26:13.212 "method": "framework_set_scheduler", 00:26:13.212 "params": { 00:26:13.212 "name": "static" 00:26:13.212 } 00:26:13.212 } 00:26:13.212 ] 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "subsystem": "nvmf", 00:26:13.212 "config": [ 00:26:13.212 { 00:26:13.212 "method": "nvmf_set_config", 00:26:13.212 "params": { 00:26:13.212 "discovery_filter": "match_any", 00:26:13.212 "admin_cmd_passthru": { 00:26:13.212 "identify_ctrlr": false 00:26:13.212 }, 00:26:13.212 "dhchap_digests": [ 00:26:13.212 "sha256", 00:26:13.212 "sha384", 00:26:13.212 "sha512" 00:26:13.212 ], 00:26:13.212 "dhchap_dhgroups": [ 00:26:13.212 "null", 00:26:13.212 "ffdhe2048", 00:26:13.212 "ffdhe3072", 00:26:13.212 "ffdhe4096", 00:26:13.212 "ffdhe6144", 00:26:13.212 "ffdhe8192" 00:26:13.212 ] 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "nvmf_set_max_subsystems", 00:26:13.212 "params": { 00:26:13.212 "max_subsystems": 1024 00:26:13.212 } 
00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "nvmf_set_crdt", 00:26:13.212 "params": { 00:26:13.212 "crdt1": 0, 00:26:13.212 "crdt2": 0, 00:26:13.212 "crdt3": 0 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "nvmf_create_transport", 00:26:13.212 "params": { 00:26:13.212 "trtype": "TCP", 00:26:13.212 "max_queue_depth": 128, 00:26:13.212 "max_io_qpairs_per_ctrlr": 127, 00:26:13.212 "in_capsule_data_size": 4096, 00:26:13.212 "max_io_size": 131072, 00:26:13.212 "io_unit_size": 131072, 00:26:13.212 "max_aq_depth": 128, 00:26:13.212 "num_shared_buffers": 511, 00:26:13.212 "buf_cache_size": 4294967295, 00:26:13.212 "dif_insert_or_strip": false, 00:26:13.212 "zcopy": false, 00:26:13.212 "c2h_success": false, 00:26:13.212 "sock_priority": 0, 00:26:13.212 "abort_timeout_sec": 1, 00:26:13.212 "ack_timeout": 0, 00:26:13.212 "data_wr_pool_size": 0 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "nvmf_create_subsystem", 00:26:13.212 "params": { 00:26:13.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.212 "allow_any_host": false, 00:26:13.212 "serial_number": "SPDK00000000000001", 00:26:13.212 "model_number": "SPDK bdev Controller", 00:26:13.212 "max_namespaces": 10, 00:26:13.212 "min_cntlid": 1, 00:26:13.212 "max_cntlid": 65519, 00:26:13.212 "ana_reporting": false 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "nvmf_subsystem_add_host", 00:26:13.212 "params": { 00:26:13.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.212 "host": "nqn.2016-06.io.spdk:host1", 00:26:13.212 "psk": "key0" 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "nvmf_subsystem_add_ns", 00:26:13.212 "params": { 00:26:13.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.212 "namespace": { 00:26:13.212 "nsid": 1, 00:26:13.212 "bdev_name": "malloc0", 00:26:13.212 "nguid": "D21E3299344E4E6689C968188870CA37", 00:26:13.212 "uuid": "d21e3299-344e-4e66-89c9-68188870ca37", 00:26:13.212 "no_auto_visible": false 00:26:13.212 } 00:26:13.212 } 00:26:13.212 }, 00:26:13.212 { 00:26:13.212 "method": "nvmf_subsystem_add_listener", 00:26:13.212 "params": { 00:26:13.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.212 "listen_address": { 00:26:13.212 "trtype": "TCP", 00:26:13.213 "adrfam": "IPv4", 00:26:13.213 "traddr": "10.0.0.2", 00:26:13.213 "trsvcid": "4420" 00:26:13.213 }, 00:26:13.213 "secure_channel": true 00:26:13.213 } 00:26:13.213 } 00:26:13.213 ] 00:26:13.213 } 00:26:13.213 ] 00:26:13.213 }' 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2723719 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2723719 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2723719 ']' 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:13.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.213 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:13.213 [2024-11-20 17:53:12.993506] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:13.213 [2024-11-20 17:53:12.993565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.213 [2024-11-20 17:53:13.075851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.213 [2024-11-20 17:53:13.104514] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.213 [2024-11-20 17:53:13.104548] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.213 [2024-11-20 17:53:13.104554] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.213 [2024-11-20 17:53:13.104558] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.213 [2024-11-20 17:53:13.104563] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.213 [2024-11-20 17:53:13.104604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.474 [2024-11-20 17:53:13.300121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.474 [2024-11-20 17:53:13.332147] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:13.474 [2024-11-20 17:53:13.332347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.046 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:14.046 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2723862 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2723862 /var/tmp/bdevperf.sock 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2723862 ']' 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:14.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:14.047 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:26:14.047 "subsystems": [ 00:26:14.047 { 00:26:14.047 "subsystem": "keyring", 00:26:14.047 "config": [ 00:26:14.047 { 00:26:14.047 "method": "keyring_file_add_key", 00:26:14.047 "params": { 00:26:14.047 "name": "key0", 00:26:14.047 "path": "/tmp/tmp.nIKuk6pLG3" 00:26:14.047 } 00:26:14.047 } 00:26:14.047 ] 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "subsystem": "iobuf", 00:26:14.047 "config": [ 00:26:14.047 { 00:26:14.047 "method": "iobuf_set_options", 00:26:14.047 "params": { 00:26:14.047 "small_pool_count": 8192, 00:26:14.047 "large_pool_count": 1024, 00:26:14.047 "small_bufsize": 8192, 00:26:14.047 "large_bufsize": 135168 00:26:14.047 } 00:26:14.047 } 00:26:14.047 ] 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "subsystem": "sock", 00:26:14.047 "config": [ 00:26:14.047 { 00:26:14.047 "method": "sock_set_default_impl", 00:26:14.047 "params": { 00:26:14.047 "impl_name": "posix" 00:26:14.047 } 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "method": "sock_impl_set_options", 00:26:14.047 "params": { 00:26:14.047 "impl_name": "ssl", 00:26:14.047 "recv_buf_size": 4096, 00:26:14.047 "send_buf_size": 4096, 00:26:14.047 "enable_recv_pipe": true, 00:26:14.047 "enable_quickack": false, 00:26:14.047 "enable_placement_id": 0, 00:26:14.047 "enable_zerocopy_send_server": true, 00:26:14.047 "enable_zerocopy_send_client": false, 00:26:14.047 "zerocopy_threshold": 0, 00:26:14.047 "tls_version": 0, 00:26:14.047 "enable_ktls": false 00:26:14.047 } 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "method": "sock_impl_set_options", 00:26:14.047 "params": { 00:26:14.047 "impl_name": "posix", 00:26:14.047 "recv_buf_size": 2097152, 00:26:14.047 "send_buf_size": 2097152, 00:26:14.047 "enable_recv_pipe": true, 00:26:14.047 "enable_quickack": false, 00:26:14.047 "enable_placement_id": 0, 00:26:14.047 "enable_zerocopy_send_server": true, 00:26:14.047 "enable_zerocopy_send_client": false, 00:26:14.047 "zerocopy_threshold": 0, 00:26:14.047 "tls_version": 0, 00:26:14.047 "enable_ktls": false 00:26:14.047 } 00:26:14.047 } 00:26:14.047 ] 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "subsystem": "vmd", 00:26:14.047 "config": [] 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "subsystem": "accel", 00:26:14.047 "config": [ 00:26:14.047 { 00:26:14.047 "method": "accel_set_options", 00:26:14.047 "params": { 00:26:14.047 "small_cache_size": 128, 00:26:14.047 "large_cache_size": 16, 00:26:14.047 "task_count": 2048, 00:26:14.047 "sequence_count": 2048, 00:26:14.047 "buf_count": 2048 00:26:14.047 } 00:26:14.047 } 00:26:14.047 ] 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "subsystem": "bdev", 00:26:14.047 "config": [ 00:26:14.047 { 00:26:14.047 "method": "bdev_set_options", 00:26:14.047 "params": { 00:26:14.047 "bdev_io_pool_size": 65535, 00:26:14.047 "bdev_io_cache_size": 256, 00:26:14.047 "bdev_auto_examine": true, 00:26:14.047 "iobuf_small_cache_size": 128, 00:26:14.047 "iobuf_large_cache_size": 16 
00:26:14.047 } 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "method": "bdev_raid_set_options", 00:26:14.047 "params": { 00:26:14.047 "process_window_size_kb": 1024, 00:26:14.047 "process_max_bandwidth_mb_sec": 0 00:26:14.047 } 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "method": "bdev_iscsi_set_options", 00:26:14.047 "params": { 00:26:14.047 "timeout_sec": 30 00:26:14.047 } 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "method": "bdev_nvme_set_options", 00:26:14.047 "params": { 00:26:14.047 "action_on_timeout": "none", 00:26:14.047 "timeout_us": 0, 00:26:14.047 "timeout_admin_us": 0, 00:26:14.047 "keep_alive_timeout_ms": 10000, 00:26:14.047 "arbitration_burst": 0, 00:26:14.047 "low_priority_weight": 0, 00:26:14.047 "medium_priority_weight": 0, 00:26:14.047 "high_priority_weight": 0, 00:26:14.047 "nvme_adminq_poll_period_us": 10000, 00:26:14.047 "nvme_ioq_poll_period_us": 0, 00:26:14.047 "io_queue_requests": 512, 00:26:14.047 "delay_cmd_submit": true, 00:26:14.047 "transport_retry_count": 4, 00:26:14.047 "bdev_retry_count": 3, 00:26:14.047 "transport_ack_timeout": 0, 00:26:14.047 "ctrlr_loss_timeout_sec": 0, 00:26:14.047 "reconnect_delay_sec": 0, 00:26:14.047 "fast_io_fail_timeout_sec": 0, 00:26:14.047 "disable_auto_failback": false, 00:26:14.047 "generate_uuids": false, 00:26:14.047 "transport_tos": 0, 00:26:14.047 "nvme_error_stat": false, 00:26:14.047 "rdma_srq_size": 0, 00:26:14.047 "io_path_stat": false, 00:26:14.047 "allow_accel_sequence": false, 00:26:14.047 "rdma_max_cq_size": 0, 00:26:14.047 "rdma_cm_event_timeout_ms": 0, 00:26:14.047 "dhchap_digests": [ 00:26:14.047 "sha256", 00:26:14.047 "sha384", 00:26:14.047 "sha512" 00:26:14.047 ], 00:26:14.047 "dhchap_dhgroups": [ 00:26:14.047 "null", 00:26:14.047 "ffdhe2048", 00:26:14.047 "ffdhe3072", 00:26:14.047 "ffdhe4096", 00:26:14.047 "ffdhe6144", 00:26:14.047 "ffdhe8192" 00:26:14.047 ] 00:26:14.047 } 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "method": "bdev_nvme_attach_controller", 00:26:14.047 "params": { 00:26:14.047 "name": "TLSTEST", 00:26:14.047 "trtype": "TCP", 00:26:14.047 "adrfam": "IPv4", 00:26:14.047 "traddr": "10.0.0.2", 00:26:14.047 "trsvcid": "4420", 00:26:14.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.047 "prchk_reftag": false, 00:26:14.047 "prchk_guard": false, 00:26:14.047 "ctrlr_loss_timeout_sec": 0, 00:26:14.047 "reconnect_delay_sec": 0, 00:26:14.047 "fast_io_fail_timeout_sec": 0, 00:26:14.047 "psk": "key0", 00:26:14.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.047 "hdgst": false, 00:26:14.047 "ddgst": false 00:26:14.047 } 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "method": "bdev_nvme_set_hotplug", 00:26:14.047 "params": { 00:26:14.047 "period_us": 100000, 00:26:14.047 "enable": false 00:26:14.047 } 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "method": "bdev_wait_for_examine" 00:26:14.047 } 00:26:14.047 ] 00:26:14.047 }, 00:26:14.047 { 00:26:14.047 "subsystem": "nbd", 00:26:14.048 "config": [] 00:26:14.048 } 00:26:14.048 ] 00:26:14.048 }' 00:26:14.048 [2024-11-20 17:53:13.869042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
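The JSON blob that just closed is the initiator-side bdevperf configuration, passed in over /dev/fd/63: it pre-loads the TLS PSK into the keyring (keyring_file_add_key with /tmp/tmp.nIKuk6pLG3) and pre-declares the TLS-enabled attach (bdev_nvme_attach_controller with "psk": "key0"), so bdevperf comes up fully wired with no runtime RPC calls. A later pass in this same suite performs the equivalent setup over the RPC socket instead; a minimal sketch of that equivalent, assuming the SPDK checkout's scripts/rpc.py (the log uses its absolute Jenkins path) and the /var/tmp/bdevperf.sock socket seen throughout:

    # Sketch: runtime equivalent of the embedded keyring + bdev_nvme config above.
    # Key path, NQNs, address, and flags are the ones this log actually uses.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1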
00:26:14.048 [2024-11-20 17:53:13.869093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723862 ] 00:26:14.048 [2024-11-20 17:53:13.945614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.309 [2024-11-20 17:53:13.976522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.309 [2024-11-20 17:53:14.109398] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:14.880 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:14.880 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:14.880 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:14.880 Running I/O for 10 seconds... 00:26:17.206 5506.00 IOPS, 21.51 MiB/s [2024-11-20T16:53:18.063Z] 5255.50 IOPS, 20.53 MiB/s [2024-11-20T16:53:19.004Z] 5177.00 IOPS, 20.22 MiB/s [2024-11-20T16:53:19.945Z] 5252.50 IOPS, 20.52 MiB/s [2024-11-20T16:53:20.888Z] 5308.20 IOPS, 20.74 MiB/s [2024-11-20T16:53:21.832Z] 5344.83 IOPS, 20.88 MiB/s [2024-11-20T16:53:23.218Z] 5311.71 IOPS, 20.75 MiB/s [2024-11-20T16:53:23.789Z] 5337.38 IOPS, 20.85 MiB/s [2024-11-20T16:53:25.172Z] 5355.22 IOPS, 20.92 MiB/s [2024-11-20T16:53:25.172Z] 5316.00 IOPS, 20.77 MiB/s 00:26:25.256 Latency(us) 00:26:25.256 [2024-11-20T16:53:25.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.256 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:25.256 Verification LBA range: start 0x0 length 0x2000 00:26:25.256 TLSTESTn1 : 10.02 5320.29 20.78 0.00 0.00 24024.46 5106.35 50025.81 00:26:25.256 [2024-11-20T16:53:25.172Z] =================================================================================================================== 00:26:25.256 [2024-11-20T16:53:25.172Z] Total : 5320.29 20.78 0.00 0.00 24024.46 5106.35 50025.81 00:26:25.256 { 00:26:25.256 "results": [ 00:26:25.256 { 00:26:25.256 "job": "TLSTESTn1", 00:26:25.256 "core_mask": "0x4", 00:26:25.256 "workload": "verify", 00:26:25.256 "status": "finished", 00:26:25.256 "verify_range": { 00:26:25.256 "start": 0, 00:26:25.256 "length": 8192 00:26:25.256 }, 00:26:25.256 "queue_depth": 128, 00:26:25.256 "io_size": 4096, 00:26:25.256 "runtime": 10.015807, 00:26:25.256 "iops": 5320.290217253587, 00:26:25.256 "mibps": 20.782383661146824, 00:26:25.256 "io_failed": 0, 00:26:25.256 "io_timeout": 0, 00:26:25.256 "avg_latency_us": 24024.46088239158, 00:26:25.256 "min_latency_us": 5106.346666666666, 00:26:25.256 "max_latency_us": 50025.81333333333 00:26:25.256 } 00:26:25.256 ], 00:26:25.256 "core_count": 1 00:26:25.256 } 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2723862 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2723862 ']' 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2723862 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723862 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723862' 00:26:25.256 killing process with pid 2723862 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2723862 00:26:25.256 Received shutdown signal, test time was about 10.000000 seconds 00:26:25.256 00:26:25.256 Latency(us) 00:26:25.256 [2024-11-20T16:53:25.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.256 [2024-11-20T16:53:25.172Z] =================================================================================================================== 00:26:25.256 [2024-11-20T16:53:25.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2723862 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2723719 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2723719 ']' 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2723719 00:26:25.256 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:25.256 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:25.256 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723719 00:26:25.256 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:25.256 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:25.256 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723719' 00:26:25.256 killing process with pid 2723719 00:26:25.257 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2723719 00:26:25.257 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2723719 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2726054 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2726054 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
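Unlike the earlier launches, this nvmf_tgt instance (pid 2726054) starts without a -c config file; the target is assembled afterwards by setup_nvmf_tgt (target/tls.sh@50 through @59, traced below) one RPC at a time against the default /var/tmp/spdk.sock. Condensed, and with the same scripts/rpc.py path assumption as above, the sequence is:

    # Sketch: the per-RPC target bring-up that setup_nvmf_tgt performs below.
    # -k marks the listener as TLS-capable; --psk binds host1 to keyring entry key0.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0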
00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2726054 ']' 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.517 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:25.517 [2024-11-20 17:53:25.236515] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:25.517 [2024-11-20 17:53:25.236571] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.517 [2024-11-20 17:53:25.320361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.517 [2024-11-20 17:53:25.360248] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.517 [2024-11-20 17:53:25.360297] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.517 [2024-11-20 17:53:25.360305] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.517 [2024-11-20 17:53:25.360311] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.517 [2024-11-20 17:53:25.360317] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:25.517 [2024-11-20 17:53:25.360338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.nIKuk6pLG3 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.nIKuk6pLG3 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:26.459 [2024-11-20 17:53:26.254462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.459 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:26.721 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:26.982 [2024-11-20 17:53:26.639426] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:26.982 [2024-11-20 17:53:26.639780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.982 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:26.982 malloc0 00:26:26.982 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:27.244 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:26:27.505 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2726436 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2726436 /var/tmp/bdevperf.sock 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2726436 ']' 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:27.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.767 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.767 [2024-11-20 17:53:27.508199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:27.767 [2024-11-20 17:53:27.508270] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2726436 ] 00:26:27.767 [2024-11-20 17:53:27.588752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.767 [2024-11-20 17:53:27.622776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.710 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.710 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:28.710 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:26:28.710 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:28.971 [2024-11-20 17:53:28.631428] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:28.971 nvme0n1 00:26:28.971 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:28.971 Running I/O for 1 seconds... 
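bdevperf reports this run as the JSON block that follows: "iops" is completed I/Os divided by the measured "runtime", and "mibps" is simply iops * io_size. A quick sanity check of the figures below, as a one-line POSIX shell sketch:

    # Sanity-check the throughput bdevperf reports below (4096 B I/O size):
    awk 'BEGIN { printf "%.2f MiB/s\n", 4947.51 * 4096 / 1048576 }'   # prints 19.33 MiB/s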
00:26:29.911 4901.00 IOPS, 19.14 MiB/s 00:26:29.911 Latency(us) 00:26:29.911 [2024-11-20T16:53:29.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.911 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:29.911 Verification LBA range: start 0x0 length 0x2000 00:26:29.911 nvme0n1 : 1.02 4947.51 19.33 0.00 0.00 25686.69 4696.75 65972.91 00:26:29.911 [2024-11-20T16:53:29.827Z] =================================================================================================================== 00:26:29.911 [2024-11-20T16:53:29.827Z] Total : 4947.51 19.33 0.00 0.00 25686.69 4696.75 65972.91 00:26:30.172 { 00:26:30.172 "results": [ 00:26:30.172 { 00:26:30.172 "job": "nvme0n1", 00:26:30.172 "core_mask": "0x2", 00:26:30.172 "workload": "verify", 00:26:30.172 "status": "finished", 00:26:30.172 "verify_range": { 00:26:30.172 "start": 0, 00:26:30.172 "length": 8192 00:26:30.172 }, 00:26:30.172 "queue_depth": 128, 00:26:30.172 "io_size": 4096, 00:26:30.172 "runtime": 1.016471, 00:26:30.172 "iops": 4947.509569874595, 00:26:30.172 "mibps": 19.32620925732264, 00:26:30.172 "io_failed": 0, 00:26:30.172 "io_timeout": 0, 00:26:30.172 "avg_latency_us": 25686.690114668258, 00:26:30.172 "min_latency_us": 4696.746666666667, 00:26:30.172 "max_latency_us": 65972.90666666666 00:26:30.172 } 00:26:30.172 ], 00:26:30.172 "core_count": 1 00:26:30.172 } 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2726436 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2726436 ']' 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2726436 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2726436 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2726436' 00:26:30.172 killing process with pid 2726436 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2726436 00:26:30.172 Received shutdown signal, test time was about 1.000000 seconds 00:26:30.172 00:26:30.172 Latency(us) 00:26:30.172 [2024-11-20T16:53:30.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.172 [2024-11-20T16:53:30.088Z] =================================================================================================================== 00:26:30.172 [2024-11-20T16:53:30.088Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.172 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2726436 00:26:30.172 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2726054 00:26:30.172 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2726054 ']' 00:26:30.172 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2726054 00:26:30.172 17:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:30.172 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.172 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2726054 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2726054' 00:26:30.434 killing process with pid 2726054 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2726054 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2726054 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2727099 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2727099 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2727099 ']' 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.434 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:30.434 [2024-11-20 17:53:30.292655] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:30.434 [2024-11-20 17:53:30.292716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.695 [2024-11-20 17:53:30.373017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.695 [2024-11-20 17:53:30.401427] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.695 [2024-11-20 17:53:30.401462] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:30.695 [2024-11-20 17:53:30.401467] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.695 [2024-11-20 17:53:30.401472] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.695 [2024-11-20 17:53:30.401476] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.695 [2024-11-20 17:53:30.401491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.267 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:31.267 [2024-11-20 17:53:31.127220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.267 malloc0 00:26:31.267 [2024-11-20 17:53:31.164521] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:31.267 [2024-11-20 17:53:31.164714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2727215 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2727215 /var/tmp/bdevperf.sock 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2727215 ']' 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:31.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:31.529 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:31.529 [2024-11-20 17:53:31.250078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
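This final pass closes the loop on configuration handling: once the target (pid 2727099) is fully configured over RPC, target/tls.sh@267 captures its live state with save_config via the suite's rpc_cmd wrapper into the tgtcfg variable (the JSON beginning below), in the same subsystem/method form that the earlier launches replayed through nvmf_tgt -c /dev/fd/62. A minimal sketch of that capture step; the output file name here is illustrative only:

    # Sketch: capture the running target's config so it can be replayed via `nvmf_tgt -c`.
    # The emitted JSON matches the subsystem/method dumps shown throughout this log.
    scripts/rpc.py save_config > /tmp/tgt_config.json   # file name is hypothetical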
00:26:31.529 [2024-11-20 17:53:31.250129] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727215 ] 00:26:31.529 [2024-11-20 17:53:31.325181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.529 [2024-11-20 17:53:31.353534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.472 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.472 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:32.472 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nIKuk6pLG3 00:26:32.472 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:32.472 [2024-11-20 17:53:32.323309] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:32.733 nvme0n1 00:26:32.733 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:32.733 Running I/O for 1 seconds... 00:26:33.675 5100.00 IOPS, 19.92 MiB/s 00:26:33.675 Latency(us) 00:26:33.675 [2024-11-20T16:53:33.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.675 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:33.675 Verification LBA range: start 0x0 length 0x2000 00:26:33.675 nvme0n1 : 1.02 5146.15 20.10 0.00 0.00 24728.20 4669.44 75584.85 00:26:33.675 [2024-11-20T16:53:33.591Z] =================================================================================================================== 00:26:33.675 [2024-11-20T16:53:33.591Z] Total : 5146.15 20.10 0.00 0.00 24728.20 4669.44 75584.85 00:26:33.675 { 00:26:33.675 "results": [ 00:26:33.675 { 00:26:33.675 "job": "nvme0n1", 00:26:33.675 "core_mask": "0x2", 00:26:33.675 "workload": "verify", 00:26:33.675 "status": "finished", 00:26:33.675 "verify_range": { 00:26:33.675 "start": 0, 00:26:33.675 "length": 8192 00:26:33.675 }, 00:26:33.675 "queue_depth": 128, 00:26:33.675 "io_size": 4096, 00:26:33.675 "runtime": 1.015906, 00:26:33.675 "iops": 5146.145411091184, 00:26:33.675 "mibps": 20.102130512074936, 00:26:33.675 "io_failed": 0, 00:26:33.675 "io_timeout": 0, 00:26:33.675 "avg_latency_us": 24728.200193828106, 00:26:33.675 "min_latency_us": 4669.44, 00:26:33.675 "max_latency_us": 75584.85333333333 00:26:33.675 } 00:26:33.675 ], 00:26:33.675 "core_count": 1 00:26:33.675 } 00:26:33.675 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:26:33.675 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.675 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:33.936 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.936 17:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:26:33.936 "subsystems": [ 00:26:33.936 { 00:26:33.936 "subsystem": "keyring", 00:26:33.936 "config": [ 00:26:33.936 { 00:26:33.936 "method": "keyring_file_add_key", 00:26:33.936 "params": { 00:26:33.936 "name": "key0", 00:26:33.936 "path": "/tmp/tmp.nIKuk6pLG3" 00:26:33.936 } 00:26:33.936 } 00:26:33.936 ] 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "subsystem": "iobuf", 00:26:33.936 "config": [ 00:26:33.936 { 00:26:33.936 "method": "iobuf_set_options", 00:26:33.936 "params": { 00:26:33.936 "small_pool_count": 8192, 00:26:33.936 "large_pool_count": 1024, 00:26:33.936 "small_bufsize": 8192, 00:26:33.936 "large_bufsize": 135168 00:26:33.936 } 00:26:33.936 } 00:26:33.936 ] 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "subsystem": "sock", 00:26:33.936 "config": [ 00:26:33.936 { 00:26:33.936 "method": "sock_set_default_impl", 00:26:33.936 "params": { 00:26:33.936 "impl_name": "posix" 00:26:33.936 } 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "method": "sock_impl_set_options", 00:26:33.936 "params": { 00:26:33.936 "impl_name": "ssl", 00:26:33.936 "recv_buf_size": 4096, 00:26:33.936 "send_buf_size": 4096, 00:26:33.936 "enable_recv_pipe": true, 00:26:33.936 "enable_quickack": false, 00:26:33.936 "enable_placement_id": 0, 00:26:33.936 "enable_zerocopy_send_server": true, 00:26:33.936 "enable_zerocopy_send_client": false, 00:26:33.936 "zerocopy_threshold": 0, 00:26:33.936 "tls_version": 0, 00:26:33.936 "enable_ktls": false 00:26:33.936 } 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "method": "sock_impl_set_options", 00:26:33.936 "params": { 00:26:33.936 "impl_name": "posix", 00:26:33.936 "recv_buf_size": 2097152, 00:26:33.936 "send_buf_size": 2097152, 00:26:33.936 "enable_recv_pipe": true, 00:26:33.936 "enable_quickack": false, 00:26:33.936 "enable_placement_id": 0, 00:26:33.936 "enable_zerocopy_send_server": true, 00:26:33.936 "enable_zerocopy_send_client": false, 00:26:33.936 "zerocopy_threshold": 0, 00:26:33.936 "tls_version": 0, 00:26:33.936 "enable_ktls": false 00:26:33.936 } 00:26:33.936 } 00:26:33.936 ] 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "subsystem": "vmd", 00:26:33.936 "config": [] 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "subsystem": "accel", 00:26:33.936 "config": [ 00:26:33.936 { 00:26:33.936 "method": "accel_set_options", 00:26:33.936 "params": { 00:26:33.936 "small_cache_size": 128, 00:26:33.936 "large_cache_size": 16, 00:26:33.936 "task_count": 2048, 00:26:33.936 "sequence_count": 2048, 00:26:33.936 "buf_count": 2048 00:26:33.936 } 00:26:33.936 } 00:26:33.936 ] 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "subsystem": "bdev", 00:26:33.936 "config": [ 00:26:33.936 { 00:26:33.936 "method": "bdev_set_options", 00:26:33.936 "params": { 00:26:33.936 "bdev_io_pool_size": 65535, 00:26:33.936 "bdev_io_cache_size": 256, 00:26:33.936 "bdev_auto_examine": true, 00:26:33.936 "iobuf_small_cache_size": 128, 00:26:33.936 "iobuf_large_cache_size": 16 00:26:33.936 } 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "method": "bdev_raid_set_options", 00:26:33.936 "params": { 00:26:33.936 "process_window_size_kb": 1024, 00:26:33.936 "process_max_bandwidth_mb_sec": 0 00:26:33.936 } 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "method": "bdev_iscsi_set_options", 00:26:33.936 "params": { 00:26:33.936 "timeout_sec": 30 00:26:33.936 } 00:26:33.936 }, 00:26:33.936 { 00:26:33.936 "method": "bdev_nvme_set_options", 00:26:33.936 "params": { 00:26:33.936 "action_on_timeout": "none", 00:26:33.936 "timeout_us": 0, 00:26:33.936 
"timeout_admin_us": 0, 00:26:33.936 "keep_alive_timeout_ms": 10000, 00:26:33.936 "arbitration_burst": 0, 00:26:33.936 "low_priority_weight": 0, 00:26:33.936 "medium_priority_weight": 0, 00:26:33.936 "high_priority_weight": 0, 00:26:33.936 "nvme_adminq_poll_period_us": 10000, 00:26:33.936 "nvme_ioq_poll_period_us": 0, 00:26:33.936 "io_queue_requests": 0, 00:26:33.936 "delay_cmd_submit": true, 00:26:33.936 "transport_retry_count": 4, 00:26:33.936 "bdev_retry_count": 3, 00:26:33.936 "transport_ack_timeout": 0, 00:26:33.936 "ctrlr_loss_timeout_sec": 0, 00:26:33.936 "reconnect_delay_sec": 0, 00:26:33.936 "fast_io_fail_timeout_sec": 0, 00:26:33.936 "disable_auto_failback": false, 00:26:33.936 "generate_uuids": false, 00:26:33.936 "transport_tos": 0, 00:26:33.936 "nvme_error_stat": false, 00:26:33.936 "rdma_srq_size": 0, 00:26:33.936 "io_path_stat": false, 00:26:33.937 "allow_accel_sequence": false, 00:26:33.937 "rdma_max_cq_size": 0, 00:26:33.937 "rdma_cm_event_timeout_ms": 0, 00:26:33.937 "dhchap_digests": [ 00:26:33.937 "sha256", 00:26:33.937 "sha384", 00:26:33.937 "sha512" 00:26:33.937 ], 00:26:33.937 "dhchap_dhgroups": [ 00:26:33.937 "null", 00:26:33.937 "ffdhe2048", 00:26:33.937 "ffdhe3072", 00:26:33.937 "ffdhe4096", 00:26:33.937 "ffdhe6144", 00:26:33.937 "ffdhe8192" 00:26:33.937 ] 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "bdev_nvme_set_hotplug", 00:26:33.937 "params": { 00:26:33.937 "period_us": 100000, 00:26:33.937 "enable": false 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "bdev_malloc_create", 00:26:33.937 "params": { 00:26:33.937 "name": "malloc0", 00:26:33.937 "num_blocks": 8192, 00:26:33.937 "block_size": 4096, 00:26:33.937 "physical_block_size": 4096, 00:26:33.937 "uuid": "2d865d9c-bebe-46f7-a500-9cbd4a59036a", 00:26:33.937 "optimal_io_boundary": 0, 00:26:33.937 "md_size": 0, 00:26:33.937 "dif_type": 0, 00:26:33.937 "dif_is_head_of_md": false, 00:26:33.937 "dif_pi_format": 0 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "bdev_wait_for_examine" 00:26:33.937 } 00:26:33.937 ] 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "subsystem": "nbd", 00:26:33.937 "config": [] 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "subsystem": "scheduler", 00:26:33.937 "config": [ 00:26:33.937 { 00:26:33.937 "method": "framework_set_scheduler", 00:26:33.937 "params": { 00:26:33.937 "name": "static" 00:26:33.937 } 00:26:33.937 } 00:26:33.937 ] 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "subsystem": "nvmf", 00:26:33.937 "config": [ 00:26:33.937 { 00:26:33.937 "method": "nvmf_set_config", 00:26:33.937 "params": { 00:26:33.937 "discovery_filter": "match_any", 00:26:33.937 "admin_cmd_passthru": { 00:26:33.937 "identify_ctrlr": false 00:26:33.937 }, 00:26:33.937 "dhchap_digests": [ 00:26:33.937 "sha256", 00:26:33.937 "sha384", 00:26:33.937 "sha512" 00:26:33.937 ], 00:26:33.937 "dhchap_dhgroups": [ 00:26:33.937 "null", 00:26:33.937 "ffdhe2048", 00:26:33.937 "ffdhe3072", 00:26:33.937 "ffdhe4096", 00:26:33.937 "ffdhe6144", 00:26:33.937 "ffdhe8192" 00:26:33.937 ] 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "nvmf_set_max_subsystems", 00:26:33.937 "params": { 00:26:33.937 "max_subsystems": 1024 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "nvmf_set_crdt", 00:26:33.937 "params": { 00:26:33.937 "crdt1": 0, 00:26:33.937 "crdt2": 0, 00:26:33.937 "crdt3": 0 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "nvmf_create_transport", 00:26:33.937 "params": { 00:26:33.937 "trtype": 
"TCP", 00:26:33.937 "max_queue_depth": 128, 00:26:33.937 "max_io_qpairs_per_ctrlr": 127, 00:26:33.937 "in_capsule_data_size": 4096, 00:26:33.937 "max_io_size": 131072, 00:26:33.937 "io_unit_size": 131072, 00:26:33.937 "max_aq_depth": 128, 00:26:33.937 "num_shared_buffers": 511, 00:26:33.937 "buf_cache_size": 4294967295, 00:26:33.937 "dif_insert_or_strip": false, 00:26:33.937 "zcopy": false, 00:26:33.937 "c2h_success": false, 00:26:33.937 "sock_priority": 0, 00:26:33.937 "abort_timeout_sec": 1, 00:26:33.937 "ack_timeout": 0, 00:26:33.937 "data_wr_pool_size": 0 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "nvmf_create_subsystem", 00:26:33.937 "params": { 00:26:33.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.937 "allow_any_host": false, 00:26:33.937 "serial_number": "00000000000000000000", 00:26:33.937 "model_number": "SPDK bdev Controller", 00:26:33.937 "max_namespaces": 32, 00:26:33.937 "min_cntlid": 1, 00:26:33.937 "max_cntlid": 65519, 00:26:33.937 "ana_reporting": false 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "nvmf_subsystem_add_host", 00:26:33.937 "params": { 00:26:33.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.937 "host": "nqn.2016-06.io.spdk:host1", 00:26:33.937 "psk": "key0" 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "nvmf_subsystem_add_ns", 00:26:33.937 "params": { 00:26:33.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.937 "namespace": { 00:26:33.937 "nsid": 1, 00:26:33.937 "bdev_name": "malloc0", 00:26:33.937 "nguid": "2D865D9CBEBE46F7A5009CBD4A59036A", 00:26:33.937 "uuid": "2d865d9c-bebe-46f7-a500-9cbd4a59036a", 00:26:33.937 "no_auto_visible": false 00:26:33.937 } 00:26:33.937 } 00:26:33.937 }, 00:26:33.937 { 00:26:33.937 "method": "nvmf_subsystem_add_listener", 00:26:33.937 "params": { 00:26:33.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.937 "listen_address": { 00:26:33.937 "trtype": "TCP", 00:26:33.937 "adrfam": "IPv4", 00:26:33.937 "traddr": "10.0.0.2", 00:26:33.937 "trsvcid": "4420" 00:26:33.937 }, 00:26:33.937 "secure_channel": false, 00:26:33.937 "sock_impl": "ssl" 00:26:33.937 } 00:26:33.937 } 00:26:33.937 ] 00:26:33.937 } 00:26:33.937 ] 00:26:33.937 }' 00:26:33.937 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:34.199 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:26:34.199 "subsystems": [ 00:26:34.199 { 00:26:34.199 "subsystem": "keyring", 00:26:34.199 "config": [ 00:26:34.199 { 00:26:34.199 "method": "keyring_file_add_key", 00:26:34.199 "params": { 00:26:34.199 "name": "key0", 00:26:34.199 "path": "/tmp/tmp.nIKuk6pLG3" 00:26:34.199 } 00:26:34.199 } 00:26:34.199 ] 00:26:34.199 }, 00:26:34.199 { 00:26:34.199 "subsystem": "iobuf", 00:26:34.199 "config": [ 00:26:34.199 { 00:26:34.199 "method": "iobuf_set_options", 00:26:34.199 "params": { 00:26:34.199 "small_pool_count": 8192, 00:26:34.199 "large_pool_count": 1024, 00:26:34.199 "small_bufsize": 8192, 00:26:34.199 "large_bufsize": 135168 00:26:34.199 } 00:26:34.199 } 00:26:34.199 ] 00:26:34.199 }, 00:26:34.199 { 00:26:34.199 "subsystem": "sock", 00:26:34.199 "config": [ 00:26:34.199 { 00:26:34.199 "method": "sock_set_default_impl", 00:26:34.199 "params": { 00:26:34.199 "impl_name": "posix" 00:26:34.199 } 00:26:34.199 }, 00:26:34.199 { 00:26:34.199 "method": "sock_impl_set_options", 00:26:34.199 "params": { 00:26:34.199 "impl_name": "ssl", 00:26:34.199 
"recv_buf_size": 4096, 00:26:34.199 "send_buf_size": 4096, 00:26:34.199 "enable_recv_pipe": true, 00:26:34.199 "enable_quickack": false, 00:26:34.199 "enable_placement_id": 0, 00:26:34.199 "enable_zerocopy_send_server": true, 00:26:34.199 "enable_zerocopy_send_client": false, 00:26:34.199 "zerocopy_threshold": 0, 00:26:34.199 "tls_version": 0, 00:26:34.199 "enable_ktls": false 00:26:34.199 } 00:26:34.199 }, 00:26:34.199 { 00:26:34.199 "method": "sock_impl_set_options", 00:26:34.199 "params": { 00:26:34.199 "impl_name": "posix", 00:26:34.199 "recv_buf_size": 2097152, 00:26:34.199 "send_buf_size": 2097152, 00:26:34.199 "enable_recv_pipe": true, 00:26:34.199 "enable_quickack": false, 00:26:34.199 "enable_placement_id": 0, 00:26:34.199 "enable_zerocopy_send_server": true, 00:26:34.199 "enable_zerocopy_send_client": false, 00:26:34.199 "zerocopy_threshold": 0, 00:26:34.199 "tls_version": 0, 00:26:34.199 "enable_ktls": false 00:26:34.199 } 00:26:34.199 } 00:26:34.199 ] 00:26:34.199 }, 00:26:34.199 { 00:26:34.199 "subsystem": "vmd", 00:26:34.199 "config": [] 00:26:34.199 }, 00:26:34.199 { 00:26:34.199 "subsystem": "accel", 00:26:34.199 "config": [ 00:26:34.199 { 00:26:34.199 "method": "accel_set_options", 00:26:34.199 "params": { 00:26:34.199 "small_cache_size": 128, 00:26:34.199 "large_cache_size": 16, 00:26:34.199 "task_count": 2048, 00:26:34.199 "sequence_count": 2048, 00:26:34.199 "buf_count": 2048 00:26:34.199 } 00:26:34.199 } 00:26:34.199 ] 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "subsystem": "bdev", 00:26:34.200 "config": [ 00:26:34.200 { 00:26:34.200 "method": "bdev_set_options", 00:26:34.200 "params": { 00:26:34.200 "bdev_io_pool_size": 65535, 00:26:34.200 "bdev_io_cache_size": 256, 00:26:34.200 "bdev_auto_examine": true, 00:26:34.200 "iobuf_small_cache_size": 128, 00:26:34.200 "iobuf_large_cache_size": 16 00:26:34.200 } 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "method": "bdev_raid_set_options", 00:26:34.200 "params": { 00:26:34.200 "process_window_size_kb": 1024, 00:26:34.200 "process_max_bandwidth_mb_sec": 0 00:26:34.200 } 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "method": "bdev_iscsi_set_options", 00:26:34.200 "params": { 00:26:34.200 "timeout_sec": 30 00:26:34.200 } 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "method": "bdev_nvme_set_options", 00:26:34.200 "params": { 00:26:34.200 "action_on_timeout": "none", 00:26:34.200 "timeout_us": 0, 00:26:34.200 "timeout_admin_us": 0, 00:26:34.200 "keep_alive_timeout_ms": 10000, 00:26:34.200 "arbitration_burst": 0, 00:26:34.200 "low_priority_weight": 0, 00:26:34.200 "medium_priority_weight": 0, 00:26:34.200 "high_priority_weight": 0, 00:26:34.200 "nvme_adminq_poll_period_us": 10000, 00:26:34.200 "nvme_ioq_poll_period_us": 0, 00:26:34.200 "io_queue_requests": 512, 00:26:34.200 "delay_cmd_submit": true, 00:26:34.200 "transport_retry_count": 4, 00:26:34.200 "bdev_retry_count": 3, 00:26:34.200 "transport_ack_timeout": 0, 00:26:34.200 "ctrlr_loss_timeout_sec": 0, 00:26:34.200 "reconnect_delay_sec": 0, 00:26:34.200 "fast_io_fail_timeout_sec": 0, 00:26:34.200 "disable_auto_failback": false, 00:26:34.200 "generate_uuids": false, 00:26:34.200 "transport_tos": 0, 00:26:34.200 "nvme_error_stat": false, 00:26:34.200 "rdma_srq_size": 0, 00:26:34.200 "io_path_stat": false, 00:26:34.200 "allow_accel_sequence": false, 00:26:34.200 "rdma_max_cq_size": 0, 00:26:34.200 "rdma_cm_event_timeout_ms": 0, 00:26:34.200 "dhchap_digests": [ 00:26:34.200 "sha256", 00:26:34.200 "sha384", 00:26:34.200 "sha512" 00:26:34.200 ], 00:26:34.200 "dhchap_dhgroups": [ 
00:26:34.200 "null", 00:26:34.200 "ffdhe2048", 00:26:34.200 "ffdhe3072", 00:26:34.200 "ffdhe4096", 00:26:34.200 "ffdhe6144", 00:26:34.200 "ffdhe8192" 00:26:34.200 ] 00:26:34.200 } 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "method": "bdev_nvme_attach_controller", 00:26:34.200 "params": { 00:26:34.200 "name": "nvme0", 00:26:34.200 "trtype": "TCP", 00:26:34.200 "adrfam": "IPv4", 00:26:34.200 "traddr": "10.0.0.2", 00:26:34.200 "trsvcid": "4420", 00:26:34.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.200 "prchk_reftag": false, 00:26:34.200 "prchk_guard": false, 00:26:34.200 "ctrlr_loss_timeout_sec": 0, 00:26:34.200 "reconnect_delay_sec": 0, 00:26:34.200 "fast_io_fail_timeout_sec": 0, 00:26:34.200 "psk": "key0", 00:26:34.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:34.200 "hdgst": false, 00:26:34.200 "ddgst": false 00:26:34.200 } 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "method": "bdev_nvme_set_hotplug", 00:26:34.200 "params": { 00:26:34.200 "period_us": 100000, 00:26:34.200 "enable": false 00:26:34.200 } 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "method": "bdev_enable_histogram", 00:26:34.200 "params": { 00:26:34.200 "name": "nvme0n1", 00:26:34.200 "enable": true 00:26:34.200 } 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "method": "bdev_wait_for_examine" 00:26:34.200 } 00:26:34.200 ] 00:26:34.200 }, 00:26:34.200 { 00:26:34.200 "subsystem": "nbd", 00:26:34.200 "config": [] 00:26:34.200 } 00:26:34.200 ] 00:26:34.200 }' 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2727215 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2727215 ']' 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2727215 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2727215 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2727215' 00:26:34.200 killing process with pid 2727215 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2727215 00:26:34.200 Received shutdown signal, test time was about 1.000000 seconds 00:26:34.200 00:26:34.200 Latency(us) 00:26:34.200 [2024-11-20T16:53:34.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.200 [2024-11-20T16:53:34.116Z] =================================================================================================================== 00:26:34.200 [2024-11-20T16:53:34.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:34.200 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2727215 00:26:34.200 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2727099 00:26:34.200 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2727099 ']' 00:26:34.200 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
kill -0 2727099 00:26:34.200 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:34.200 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:34.200 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2727099 00:26:34.461 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:34.461 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:34.461 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2727099' 00:26:34.461 killing process with pid 2727099 00:26:34.461 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2727099 00:26:34.461 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2727099 00:26:34.462 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:26:34.462 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:34.462 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:34.462 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:26:34.462 "subsystems": [ 00:26:34.462 { 00:26:34.462 "subsystem": "keyring", 00:26:34.462 "config": [ 00:26:34.462 { 00:26:34.462 "method": "keyring_file_add_key", 00:26:34.462 "params": { 00:26:34.462 "name": "key0", 00:26:34.462 "path": "/tmp/tmp.nIKuk6pLG3" 00:26:34.462 } 00:26:34.462 } 00:26:34.462 ] 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "subsystem": "iobuf", 00:26:34.462 "config": [ 00:26:34.462 { 00:26:34.462 "method": "iobuf_set_options", 00:26:34.462 "params": { 00:26:34.462 "small_pool_count": 8192, 00:26:34.462 "large_pool_count": 1024, 00:26:34.462 "small_bufsize": 8192, 00:26:34.462 "large_bufsize": 135168 00:26:34.462 } 00:26:34.462 } 00:26:34.462 ] 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "subsystem": "sock", 00:26:34.462 "config": [ 00:26:34.462 { 00:26:34.462 "method": "sock_set_default_impl", 00:26:34.462 "params": { 00:26:34.462 "impl_name": "posix" 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "method": "sock_impl_set_options", 00:26:34.462 "params": { 00:26:34.462 "impl_name": "ssl", 00:26:34.462 "recv_buf_size": 4096, 00:26:34.462 "send_buf_size": 4096, 00:26:34.462 "enable_recv_pipe": true, 00:26:34.462 "enable_quickack": false, 00:26:34.462 "enable_placement_id": 0, 00:26:34.462 "enable_zerocopy_send_server": true, 00:26:34.462 "enable_zerocopy_send_client": false, 00:26:34.462 "zerocopy_threshold": 0, 00:26:34.462 "tls_version": 0, 00:26:34.462 "enable_ktls": false 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "method": "sock_impl_set_options", 00:26:34.462 "params": { 00:26:34.462 "impl_name": "posix", 00:26:34.462 "recv_buf_size": 2097152, 00:26:34.462 "send_buf_size": 2097152, 00:26:34.462 "enable_recv_pipe": true, 00:26:34.462 "enable_quickack": false, 00:26:34.462 "enable_placement_id": 0, 00:26:34.462 "enable_zerocopy_send_server": true, 00:26:34.462 "enable_zerocopy_send_client": false, 00:26:34.462 "zerocopy_threshold": 0, 00:26:34.462 "tls_version": 0, 00:26:34.462 "enable_ktls": false 00:26:34.462 } 00:26:34.462 } 00:26:34.462 ] 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "subsystem": "vmd", 00:26:34.462 "config": 
[] 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "subsystem": "accel", 00:26:34.462 "config": [ 00:26:34.462 { 00:26:34.462 "method": "accel_set_options", 00:26:34.462 "params": { 00:26:34.462 "small_cache_size": 128, 00:26:34.462 "large_cache_size": 16, 00:26:34.462 "task_count": 2048, 00:26:34.462 "sequence_count": 2048, 00:26:34.462 "buf_count": 2048 00:26:34.462 } 00:26:34.462 } 00:26:34.462 ] 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "subsystem": "bdev", 00:26:34.462 "config": [ 00:26:34.462 { 00:26:34.462 "method": "bdev_set_options", 00:26:34.462 "params": { 00:26:34.462 "bdev_io_pool_size": 65535, 00:26:34.462 "bdev_io_cache_size": 256, 00:26:34.462 "bdev_auto_examine": true, 00:26:34.462 "iobuf_small_cache_size": 128, 00:26:34.462 "iobuf_large_cache_size": 16 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "method": "bdev_raid_set_options", 00:26:34.462 "params": { 00:26:34.462 "process_window_size_kb": 1024, 00:26:34.462 "process_max_bandwidth_mb_sec": 0 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "method": "bdev_iscsi_set_options", 00:26:34.462 "params": { 00:26:34.462 "timeout_sec": 30 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "method": "bdev_nvme_set_options", 00:26:34.462 "params": { 00:26:34.462 "action_on_timeout": "none", 00:26:34.462 "timeout_us": 0, 00:26:34.462 "timeout_admin_us": 0, 00:26:34.462 "keep_alive_timeout_ms": 10000, 00:26:34.462 "arbitration_burst": 0, 00:26:34.462 "low_priority_weight": 0, 00:26:34.462 "medium_priority_weight": 0, 00:26:34.462 "high_priority_weight": 0, 00:26:34.462 "nvme_adminq_poll_period_us": 10000, 00:26:34.462 "nvme_ioq_poll_period_us": 0, 00:26:34.462 "io_queue_requests": 0, 00:26:34.462 "delay_cmd_submit": true, 00:26:34.462 "transport_retry_count": 4, 00:26:34.462 "bdev_retry_count": 3, 00:26:34.462 "transport_ack_timeout": 0, 00:26:34.462 "ctrlr_loss_timeout_sec": 0, 00:26:34.462 "reconnect_delay_sec": 0, 00:26:34.462 "fast_io_fail_timeout_sec": 0, 00:26:34.462 "disable_auto_failback": false, 00:26:34.462 "generate_uuids": false, 00:26:34.462 "transport_tos": 0, 00:26:34.462 "nvme_error_stat": false, 00:26:34.462 "rdma_srq_size": 0, 00:26:34.462 "io_path_stat": false, 00:26:34.462 "allow_accel_sequence": false, 00:26:34.462 "rdma_max_cq_size": 0, 00:26:34.462 "rdma_cm_event_timeout_ms": 0, 00:26:34.462 "dhchap_digests": [ 00:26:34.462 "sha256", 00:26:34.462 "sha384", 00:26:34.462 "sha512" 00:26:34.462 ], 00:26:34.462 "dhchap_dhgroups": [ 00:26:34.462 "null", 00:26:34.462 "ffdhe2048", 00:26:34.462 "ffdhe3072", 00:26:34.462 "ffdhe4096", 00:26:34.462 "ffdhe6144", 00:26:34.462 "ffdhe8192" 00:26:34.462 ] 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "method": "bdev_nvme_set_hotplug", 00:26:34.462 "params": { 00:26:34.462 "period_us": 100000, 00:26:34.462 "enable": false 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "method": "bdev_malloc_create", 00:26:34.462 "params": { 00:26:34.462 "name": "malloc0", 00:26:34.462 "num_blocks": 8192, 00:26:34.462 "block_size": 4096, 00:26:34.462 "physical_block_size": 4096, 00:26:34.462 "uuid": "2d865d9c-bebe-46f7-a500-9cbd4a59036a", 00:26:34.462 "optimal_io_boundary": 0, 00:26:34.462 "md_size": 0, 00:26:34.462 "dif_type": 0, 00:26:34.462 "dif_is_head_of_md": false, 00:26:34.462 "dif_pi_format": 0 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "method": "bdev_wait_for_examine" 00:26:34.462 } 00:26:34.462 ] 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "subsystem": "nbd", 00:26:34.462 "config": [] 00:26:34.462 }, 00:26:34.462 
{ 00:26:34.462 "subsystem": "scheduler", 00:26:34.462 "config": [ 00:26:34.462 { 00:26:34.462 "method": "framework_set_scheduler", 00:26:34.462 "params": { 00:26:34.462 "name": "static" 00:26:34.462 } 00:26:34.462 } 00:26:34.462 ] 00:26:34.462 }, 00:26:34.462 { 00:26:34.462 "subsystem": "nvmf", 00:26:34.462 "config": [ 00:26:34.462 { 00:26:34.462 "method": "nvmf_set_config", 00:26:34.462 "params": { 00:26:34.462 "discovery_filter": "match_any", 00:26:34.462 "admin_cmd_passthru": { 00:26:34.462 "identify_ctrlr": false 00:26:34.462 }, 00:26:34.462 "dhchap_digests": [ 00:26:34.462 "sha256", 00:26:34.462 "sha384", 00:26:34.462 "sha512" 00:26:34.462 ], 00:26:34.462 "dhchap_dhgroups": [ 00:26:34.462 "null", 00:26:34.462 "ffdhe2048", 00:26:34.462 "ffdhe3072", 00:26:34.462 "ffdhe4096", 00:26:34.462 "ffdhe6144", 00:26:34.462 "ffdhe8192" 00:26:34.462 ] 00:26:34.462 } 00:26:34.462 }, 00:26:34.462 { 00:26:34.463 "method": "nvmf_set_max_subsystems", 00:26:34.463 "params": { 00:26:34.463 "max_subsystems": 1024 00:26:34.463 } 00:26:34.463 }, 00:26:34.463 { 00:26:34.463 "method": "nvmf_set_crdt", 00:26:34.463 "params": { 00:26:34.463 "crdt1": 0, 00:26:34.463 "crdt2": 0, 00:26:34.463 "crdt3": 0 00:26:34.463 } 00:26:34.463 }, 00:26:34.463 { 00:26:34.463 "method": "nvmf_create_transport", 00:26:34.463 "params": { 00:26:34.463 "trtype": "TCP", 00:26:34.463 "max_queue_depth": 128, 00:26:34.463 "max_io_qpairs_per_ctrlr": 127, 00:26:34.463 "in_capsule_data_size": 4096, 00:26:34.463 "max_io_size": 131072, 00:26:34.463 "io_unit_size": 131072, 00:26:34.463 "max_aq_depth": 128, 00:26:34.463 "num_shared_buffers": 511, 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:34.463 "buf_cache_size": 4294967295, 00:26:34.463 "dif_insert_or_strip": false, 00:26:34.463 "zcopy": false, 00:26:34.463 "c2h_success": false, 00:26:34.463 "sock_priority": 0, 00:26:34.463 "abort_timeout_sec": 1, 00:26:34.463 "ack_timeout": 0, 00:26:34.463 "data_wr_pool_size": 0 00:26:34.463 } 00:26:34.463 }, 00:26:34.463 { 00:26:34.463 "method": "nvmf_create_subsystem", 00:26:34.463 "params": { 00:26:34.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.463 "allow_any_host": false, 00:26:34.463 "serial_number": "00000000000000000000", 00:26:34.463 "model_number": "SPDK bdev Controller", 00:26:34.463 "max_namespaces": 32, 00:26:34.463 "min_cntlid": 1, 00:26:34.463 "max_cntlid": 65519, 00:26:34.463 "ana_reporting": false 00:26:34.463 } 00:26:34.463 }, 00:26:34.463 { 00:26:34.463 "method": "nvmf_subsystem_add_host", 00:26:34.463 "params": { 00:26:34.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.463 "host": "nqn.2016-06.io.spdk:host1", 00:26:34.463 "psk": "key0" 00:26:34.463 } 00:26:34.463 }, 00:26:34.463 { 00:26:34.463 "method": "nvmf_subsystem_add_ns", 00:26:34.463 "params": { 00:26:34.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.463 "namespace": { 00:26:34.463 "nsid": 1, 00:26:34.463 "bdev_name": "malloc0", 00:26:34.463 "nguid": "2D865D9CBEBE46F7A5009CBD4A59036A", 00:26:34.463 "uuid": "2d865d9c-bebe-46f7-a500-9cbd4a59036a", 00:26:34.463 "no_auto_visible": false 00:26:34.463 } 00:26:34.463 } 00:26:34.463 }, 00:26:34.463 { 00:26:34.463 "method": "nvmf_subsystem_add_listener", 00:26:34.463 "params": { 00:26:34.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.463 "listen_address": { 00:26:34.463 "trtype": "TCP", 00:26:34.463 "adrfam": "IPv4", 00:26:34.463 "traddr": "10.0.0.2", 00:26:34.463 "trsvcid": "4420" 00:26:34.463 }, 00:26:34.463 "secure_channel": false, 00:26:34.463 "sock_impl": "ssl"
00:26:34.463 } 00:26:34.463 } 00:26:34.463 ] 00:26:34.463 } 00:26:34.463 ] 00:26:34.463 }' 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=2727805 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 2727805 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2727805 ']' 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:34.463 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:34.463 [2024-11-20 17:53:34.343055] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:34.463 [2024-11-20 17:53:34.343113] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.724 [2024-11-20 17:53:34.425888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.724 [2024-11-20 17:53:34.454609] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.724 [2024-11-20 17:53:34.454640] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.724 [2024-11-20 17:53:34.454646] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.724 [2024-11-20 17:53:34.454651] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.724 [2024-11-20 17:53:34.454655] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
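The target restarted above receives its whole configuration as JSON on an inherited file descriptor (-c /dev/fd/62) instead of a config file on disk. A minimal sketch of that pattern, assuming a tgtcfg shell variable holding the save_config output captured earlier in the trace (the process-substitution form is one way to reproduce it, not the harness's exact plumbing):

  # capture the live configuration of the running target
  tgtcfg=$(scripts/rpc.py save_config)
  # relaunch from that JSON without touching disk; bash process substitution
  # hands the app a /dev/fd/NN path, which is what -c /dev/fd/62 refers to here
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &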
00:26:34.724 [2024-11-20 17:53:34.454700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.985 [2024-11-20 17:53:34.657168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.985 [2024-11-20 17:53:34.689163] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:34.985 [2024-11-20 17:53:34.689359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.246 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:35.246 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:35.246 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:35.246 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:35.246 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2728146 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2728146 /var/tmp/bdevperf.sock 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2728146 ']' 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:35.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
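Note the difference from the first pass: the bdevperf instance launched next takes -c /dev/fd/63, so the bperfcfg JSON (including keyring_file_add_key and bdev_nvme_attach_controller with "psk": "key0") is replayed during application init and the TLS connection to the target exists before any RPC is issued. A short sketch of checking that state before starting I/O, using the same calls the script issues further down:

  # the controller created from the startup JSON should already be present
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # expected output for this run: nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests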
00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:35.507 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:26:35.507 "subsystems": [ 00:26:35.507 { 00:26:35.507 "subsystem": "keyring", 00:26:35.507 "config": [ 00:26:35.507 { 00:26:35.507 "method": "keyring_file_add_key", 00:26:35.507 "params": { 00:26:35.507 "name": "key0", 00:26:35.507 "path": "/tmp/tmp.nIKuk6pLG3" 00:26:35.507 } 00:26:35.507 } 00:26:35.507 ] 00:26:35.507 }, 00:26:35.507 { 00:26:35.507 "subsystem": "iobuf", 00:26:35.507 "config": [ 00:26:35.507 { 00:26:35.507 "method": "iobuf_set_options", 00:26:35.507 "params": { 00:26:35.507 "small_pool_count": 8192, 00:26:35.507 "large_pool_count": 1024, 00:26:35.507 "small_bufsize": 8192, 00:26:35.507 "large_bufsize": 135168 00:26:35.507 } 00:26:35.507 } 00:26:35.507 ] 00:26:35.507 }, 00:26:35.507 { 00:26:35.507 "subsystem": "sock", 00:26:35.507 "config": [ 00:26:35.507 { 00:26:35.507 "method": "sock_set_default_impl", 00:26:35.507 "params": { 00:26:35.507 "impl_name": "posix" 00:26:35.507 } 00:26:35.507 }, 00:26:35.507 { 00:26:35.507 "method": "sock_impl_set_options", 00:26:35.507 "params": { 00:26:35.507 "impl_name": "ssl", 00:26:35.507 "recv_buf_size": 4096, 00:26:35.507 "send_buf_size": 4096, 00:26:35.507 "enable_recv_pipe": true, 00:26:35.507 "enable_quickack": false, 00:26:35.507 "enable_placement_id": 0, 00:26:35.507 "enable_zerocopy_send_server": true, 00:26:35.507 "enable_zerocopy_send_client": false, 00:26:35.507 "zerocopy_threshold": 0, 00:26:35.507 "tls_version": 0, 00:26:35.507 "enable_ktls": false 00:26:35.507 } 00:26:35.507 }, 00:26:35.507 { 00:26:35.507 "method": "sock_impl_set_options", 00:26:35.507 "params": { 00:26:35.507 "impl_name": "posix", 00:26:35.507 "recv_buf_size": 2097152, 00:26:35.507 "send_buf_size": 2097152, 00:26:35.507 "enable_recv_pipe": true, 00:26:35.507 "enable_quickack": false, 00:26:35.507 "enable_placement_id": 0, 00:26:35.508 "enable_zerocopy_send_server": true, 00:26:35.508 "enable_zerocopy_send_client": false, 00:26:35.508 "zerocopy_threshold": 0, 00:26:35.508 "tls_version": 0, 00:26:35.508 "enable_ktls": false 00:26:35.508 } 00:26:35.508 } 00:26:35.508 ] 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "subsystem": "vmd", 00:26:35.508 "config": [] 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "subsystem": "accel", 00:26:35.508 "config": [ 00:26:35.508 { 00:26:35.508 "method": "accel_set_options", 00:26:35.508 "params": { 00:26:35.508 "small_cache_size": 128, 00:26:35.508 "large_cache_size": 16, 00:26:35.508 "task_count": 2048, 00:26:35.508 "sequence_count": 2048, 00:26:35.508 "buf_count": 2048 00:26:35.508 } 00:26:35.508 } 00:26:35.508 ] 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "subsystem": "bdev", 00:26:35.508 "config": [ 00:26:35.508 { 00:26:35.508 "method": "bdev_set_options", 00:26:35.508 "params": { 00:26:35.508 "bdev_io_pool_size": 65535, 00:26:35.508 "bdev_io_cache_size": 256, 00:26:35.508 "bdev_auto_examine": true, 00:26:35.508 "iobuf_small_cache_size": 128, 00:26:35.508 "iobuf_large_cache_size": 16 00:26:35.508 } 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "method": "bdev_raid_set_options", 00:26:35.508 
"params": { 00:26:35.508 "process_window_size_kb": 1024, 00:26:35.508 "process_max_bandwidth_mb_sec": 0 00:26:35.508 } 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "method": "bdev_iscsi_set_options", 00:26:35.508 "params": { 00:26:35.508 "timeout_sec": 30 00:26:35.508 } 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "method": "bdev_nvme_set_options", 00:26:35.508 "params": { 00:26:35.508 "action_on_timeout": "none", 00:26:35.508 "timeout_us": 0, 00:26:35.508 "timeout_admin_us": 0, 00:26:35.508 "keep_alive_timeout_ms": 10000, 00:26:35.508 "arbitration_burst": 0, 00:26:35.508 "low_priority_weight": 0, 00:26:35.508 "medium_priority_weight": 0, 00:26:35.508 "high_priority_weight": 0, 00:26:35.508 "nvme_adminq_poll_period_us": 10000, 00:26:35.508 "nvme_ioq_poll_period_us": 0, 00:26:35.508 "io_queue_requests": 512, 00:26:35.508 "delay_cmd_submit": true, 00:26:35.508 "transport_retry_count": 4, 00:26:35.508 "bdev_retry_count": 3, 00:26:35.508 "transport_ack_timeout": 0, 00:26:35.508 "ctrlr_loss_timeout_sec": 0, 00:26:35.508 "reconnect_delay_sec": 0, 00:26:35.508 "fast_io_fail_timeout_sec": 0, 00:26:35.508 "disable_auto_failback": false, 00:26:35.508 "generate_uuids": false, 00:26:35.508 "transport_tos": 0, 00:26:35.508 "nvme_error_stat": false, 00:26:35.508 "rdma_srq_size": 0, 00:26:35.508 "io_path_stat": false, 00:26:35.508 "allow_accel_sequence": false, 00:26:35.508 "rdma_max_cq_size": 0, 00:26:35.508 "rdma_cm_event_timeout_ms": 0, 00:26:35.508 "dhchap_digests": [ 00:26:35.508 "sha256", 00:26:35.508 "sha384", 00:26:35.508 "sha512" 00:26:35.508 ], 00:26:35.508 "dhchap_dhgroups": [ 00:26:35.508 "null", 00:26:35.508 "ffdhe2048", 00:26:35.508 "ffdhe3072", 00:26:35.508 "ffdhe4096", 00:26:35.508 "ffdhe6144", 00:26:35.508 "ffdhe8192" 00:26:35.508 ] 00:26:35.508 } 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "method": "bdev_nvme_attach_controller", 00:26:35.508 "params": { 00:26:35.508 "name": "nvme0", 00:26:35.508 "trtype": "TCP", 00:26:35.508 "adrfam": "IPv4", 00:26:35.508 "traddr": "10.0.0.2", 00:26:35.508 "trsvcid": "4420", 00:26:35.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.508 "prchk_reftag": false, 00:26:35.508 "prchk_guard": false, 00:26:35.508 "ctrlr_loss_timeout_sec": 0, 00:26:35.508 "reconnect_delay_sec": 0, 00:26:35.508 "fast_io_fail_timeout_sec": 0, 00:26:35.508 "psk": "key0", 00:26:35.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.508 "hdgst": false, 00:26:35.508 "ddgst": false 00:26:35.508 } 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "method": "bdev_nvme_set_hotplug", 00:26:35.508 "params": { 00:26:35.508 "period_us": 100000, 00:26:35.508 "enable": false 00:26:35.508 } 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "method": "bdev_enable_histogram", 00:26:35.508 "params": { 00:26:35.508 "name": "nvme0n1", 00:26:35.508 "enable": true 00:26:35.508 } 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "method": "bdev_wait_for_examine" 00:26:35.508 } 00:26:35.508 ] 00:26:35.508 }, 00:26:35.508 { 00:26:35.508 "subsystem": "nbd", 00:26:35.508 "config": [] 00:26:35.508 } 00:26:35.508 ] 00:26:35.508 }' 00:26:35.508 [2024-11-20 17:53:35.210782] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:26:35.508 [2024-11-20 17:53:35.210836] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728146 ] 00:26:35.508 [2024-11-20 17:53:35.284305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.508 [2024-11-20 17:53:35.312756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.769 [2024-11-20 17:53:35.441967] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:36.342 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.342 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:36.342 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:36.342 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:26:36.342 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.342 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:36.602 Running I/O for 1 seconds... 00:26:37.543 6029.00 IOPS, 23.55 MiB/s 00:26:37.543 Latency(us) 00:26:37.543 [2024-11-20T16:53:37.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.543 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:37.543 Verification LBA range: start 0x0 length 0x2000 00:26:37.543 nvme0n1 : 1.01 6088.41 23.78 0.00 0.00 20895.63 4696.75 22500.69 00:26:37.543 [2024-11-20T16:53:37.459Z] =================================================================================================================== 00:26:37.543 [2024-11-20T16:53:37.459Z] Total : 6088.41 23.78 0.00 0.00 20895.63 4696.75 22500.69 00:26:37.543 { 00:26:37.543 "results": [ 00:26:37.543 { 00:26:37.543 "job": "nvme0n1", 00:26:37.543 "core_mask": "0x2", 00:26:37.543 "workload": "verify", 00:26:37.543 "status": "finished", 00:26:37.543 "verify_range": { 00:26:37.543 "start": 0, 00:26:37.543 "length": 8192 00:26:37.543 }, 00:26:37.543 "queue_depth": 128, 00:26:37.543 "io_size": 4096, 00:26:37.543 "runtime": 1.011265, 00:26:37.543 "iops": 6088.414016108537, 00:26:37.543 "mibps": 23.782867250423973, 00:26:37.543 "io_failed": 0, 00:26:37.544 "io_timeout": 0, 00:26:37.544 "avg_latency_us": 20895.627247035896, 00:26:37.544 "min_latency_us": 4696.746666666667, 00:26:37.544 "max_latency_us": 22500.693333333333 00:26:37.544 } 00:26:37.544 ], 00:26:37.544 "core_count": 1 00:26:37.544 } 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id 
= --pid ']' 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:37.544 nvmf_trace.0 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2728146 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2728146 ']' 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2728146 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:37.544 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2728146 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2728146' 00:26:37.806 killing process with pid 2728146 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2728146 00:26:37.806 Received shutdown signal, test time was about 1.000000 seconds 00:26:37.806 00:26:37.806 Latency(us) 00:26:37.806 [2024-11-20T16:53:37.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.806 [2024-11-20T16:53:37.722Z] =================================================================================================================== 00:26:37.806 [2024-11-20T16:53:37.722Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2728146 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:37.806 rmmod nvme_tcp 00:26:37.806 rmmod nvme_fabrics 00:26:37.806 rmmod nvme_keyring 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:37.806 17:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 2727805 ']' 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 2727805 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2727805 ']' 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2727805 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:37.806 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2727805 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2727805' 00:26:38.067 killing process with pid 2727805 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2727805 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2727805 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.067 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hUYf667lLY /tmp/tmp.uqIil78t7i /tmp/tmp.nIKuk6pLG3 00:26:40.615 00:26:40.615 real 1m25.010s 00:26:40.615 user 2m11.928s 00:26:40.615 sys 0m27.301s 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:40.615 ************************************ 00:26:40.615 END TEST nvmf_tls 
00:26:40.615 ************************************ 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:40.615 ************************************ 00:26:40.615 START TEST nvmf_fips 00:26:40.615 ************************************ 00:26:40.615 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:40.615 * Looking for test storage... 00:26:40.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:40.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.615 --rc genhtml_branch_coverage=1 00:26:40.615 --rc genhtml_function_coverage=1 00:26:40.615 --rc genhtml_legend=1 00:26:40.615 --rc geninfo_all_blocks=1 00:26:40.615 --rc geninfo_unexecuted_blocks=1 00:26:40.615 00:26:40.615 ' 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:40.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.615 --rc genhtml_branch_coverage=1 00:26:40.615 --rc genhtml_function_coverage=1 00:26:40.615 --rc genhtml_legend=1 00:26:40.615 --rc geninfo_all_blocks=1 00:26:40.615 --rc geninfo_unexecuted_blocks=1 00:26:40.615 00:26:40.615 ' 00:26:40.615 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:40.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.615 --rc genhtml_branch_coverage=1 00:26:40.616 --rc genhtml_function_coverage=1 00:26:40.616 --rc genhtml_legend=1 00:26:40.616 --rc geninfo_all_blocks=1 00:26:40.616 --rc geninfo_unexecuted_blocks=1 00:26:40.616 00:26:40.616 ' 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:40.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.616 --rc genhtml_branch_coverage=1 00:26:40.616 --rc genhtml_function_coverage=1 00:26:40.616 --rc genhtml_legend=1 00:26:40.616 --rc geninfo_all_blocks=1 00:26:40.616 --rc geninfo_unexecuted_blocks=1 00:26:40.616 00:26:40.616 ' 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:26:40.616 17:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.616 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:26:40.617 Error setting digest 00:26:40.617 40825CF3207F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:26:40.617 40825CF3207F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:40.617 
17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:26:40.617 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.764 17:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:48.764 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:48.765 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:48.765 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.765 17:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:48.765 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:48.765 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:26:48.765 00:26:48.765 --- 10.0.0.2 ping statistics --- 00:26:48.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.765 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:26:48.765 00:26:48.765 --- 10.0.0.1 ping statistics --- 00:26:48.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.765 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=2732820 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 2732820 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2732820 ']' 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.765 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:48.765 [2024-11-20 17:53:47.946340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
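The `waitforlisten 2732820` call traced above blocks until the freshly launched nvmf_tgt answers on its RPC socket. A minimal sketch of that polling pattern, using the pid, socket path, and max_retries=100 visible in this trace — probing with rpc_get_methods (a core SPDK RPC) is an assumed stand-in, not the helper's exact body:

```bash
# Sketch of the waitforlisten pattern; pid, socket path, and retry count come
# from the trace above, the rpc_get_methods probe is an assumed stand-in.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
pid=2732820                 # nvmfpid reported above
rpc_addr=/var/tmp/spdk.sock # rpc_addr reported above

for ((i = 0; i < 100; i++)); do           # max_retries=100, as traced
    # Stop waiting if nvmf_tgt already died.
    kill -0 "$pid" 2>/dev/null || { echo "pid $pid exited early" >&2; exit 1; }
    # A successful round-trip means the UNIX socket is up and serving RPCs.
    "$rpc_py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
```

Note that although nvmf_tgt runs under `ip netns exec cvl_0_0_ns_spdk`, network namespaces do not isolate the filesystem, so the UNIX-domain RPC socket at /var/tmp/spdk.sock stays reachable from the default namespace where the test script runs.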
00:26:48.765 [2024-11-20 17:53:47.946413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.765 [2024-11-20 17:53:48.036697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.765 [2024-11-20 17:53:48.082961] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.765 [2024-11-20 17:53:48.083015] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.765 [2024-11-20 17:53:48.083024] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.766 [2024-11-20 17:53:48.083032] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.766 [2024-11-20 17:53:48.083038] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.766 [2024-11-20 17:53:48.083064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.jJv 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.jJv 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.jJv 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.jJv 00:26:49.027 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:49.288 [2024-11-20 17:53:48.982326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.288 [2024-11-20 17:53:48.998322] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:49.288 [2024-11-20 17:53:48.998671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.288 malloc0 00:26:49.288 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:49.288 17:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2733011 00:26:49.288 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2733011 /var/tmp/bdevperf.sock 00:26:49.288 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:49.288 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2733011 ']' 00:26:49.289 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:49.289 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.289 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:49.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:49.289 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.289 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:49.289 [2024-11-20 17:53:49.173082] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:26:49.289 [2024-11-20 17:53:49.173174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733011 ] 00:26:49.549 [2024-11-20 17:53:49.256789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.549 [2024-11-20 17:53:49.305792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.120 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.120 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:26:50.120 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.jJv 00:26:50.381 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:50.642 [2024-11-20 17:53:50.345347] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:50.642 TLSTESTn1 00:26:50.642 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:50.642 Running I/O for 10 seconds... 
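The ten-second verify run announced above was wired up by just two RPCs against the bdevperf socket. Condensed from the commands traced in this log; `$SPDK_DIR` is shorthand introduced here for the long /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix:

```bash
# Condensed from the trace above; $SPDK_DIR abbreviates the Jenkins workspace
# spdk checkout (shorthand added here, not a variable the script defines).
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Register the 0600-permission PSK file generated earlier under the name key0.
$rpc keyring_file_add_key key0 /tmp/spdk-psk.jJv

# Attach a TLS-enabled NVMe/TCP controller to the target using that key;
# the resulting bdev is TLSTESTn1, the device exercised below.
$rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Start the queued workload bdevperf was launched with (-q 128 -o 4096 -w verify -t 10).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
```

As a sanity check on the summary that follows, throughput is just IOPS times the 4096-byte I/O size: 5436.42 × 4096 / 2^20 ≈ 21.24 MiB/s, matching the reported mibps of 21.236.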
00:26:52.970 3554.00 IOPS, 13.88 MiB/s [2024-11-20T16:53:53.826Z] 4515.00 IOPS, 17.64 MiB/s [2024-11-20T16:53:54.767Z] 4919.00 IOPS, 19.21 MiB/s [2024-11-20T16:53:55.709Z] 5036.75 IOPS, 19.67 MiB/s [2024-11-20T16:53:56.651Z] 5188.80 IOPS, 20.27 MiB/s [2024-11-20T16:53:57.594Z] 5408.17 IOPS, 21.13 MiB/s [2024-11-20T16:53:58.978Z] 5435.86 IOPS, 21.23 MiB/s [2024-11-20T16:53:59.644Z] 5291.00 IOPS, 20.67 MiB/s [2024-11-20T16:54:00.696Z] 5356.22 IOPS, 20.92 MiB/s [2024-11-20T16:54:00.696Z] 5432.80 IOPS, 21.22 MiB/s 00:27:00.780 Latency(us) 00:27:00.780 [2024-11-20T16:54:00.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.780 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:00.780 Verification LBA range: start 0x0 length 0x2000 00:27:00.780 TLSTESTn1 : 10.02 5436.42 21.24 0.00 0.00 23508.35 6280.53 33860.27 00:27:00.780 [2024-11-20T16:54:00.696Z] =================================================================================================================== 00:27:00.780 [2024-11-20T16:54:00.696Z] Total : 5436.42 21.24 0.00 0.00 23508.35 6280.53 33860.27 00:27:00.780 { 00:27:00.780 "results": [ 00:27:00.780 { 00:27:00.780 "job": "TLSTESTn1", 00:27:00.780 "core_mask": "0x4", 00:27:00.780 "workload": "verify", 00:27:00.780 "status": "finished", 00:27:00.780 "verify_range": { 00:27:00.780 "start": 0, 00:27:00.780 "length": 8192 00:27:00.780 }, 00:27:00.780 "queue_depth": 128, 00:27:00.780 "io_size": 4096, 00:27:00.780 "runtime": 10.016697, 00:27:00.780 "iops": 5436.422804842754, 00:27:00.780 "mibps": 21.236026581417008, 00:27:00.780 "io_failed": 0, 00:27:00.780 "io_timeout": 0, 00:27:00.780 "avg_latency_us": 23508.354714902212, 00:27:00.780 "min_latency_us": 6280.533333333334, 00:27:00.780 "max_latency_us": 33860.26666666667 00:27:00.780 } 00:27:00.780 ], 00:27:00.780 "core_count": 1 00:27:00.780 } 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:27:00.780 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:00.780 nvmf_trace.0 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2733011 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2733011 ']' 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 2733011 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2733011 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2733011' 00:27:01.041 killing process with pid 2733011 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2733011 00:27:01.041 Received shutdown signal, test time was about 10.000000 seconds 00:27:01.041 00:27:01.041 Latency(us) 00:27:01.041 [2024-11-20T16:54:00.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.041 [2024-11-20T16:54:00.957Z] =================================================================================================================== 00:27:01.041 [2024-11-20T16:54:00.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2733011 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:01.041 rmmod nvme_tcp 00:27:01.041 rmmod nvme_fabrics 00:27:01.041 rmmod nvme_keyring 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:27:01.041 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:27:01.302 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 2732820 ']' 00:27:01.302 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 2732820 00:27:01.302 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2732820 ']' 00:27:01.302 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2732820 00:27:01.302 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:27:01.302 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.302 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2732820 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:01.302 17:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2732820' 00:27:01.302 killing process with pid 2732820 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2732820 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2732820 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.302 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.jJv 00:27:03.849 00:27:03.849 real 0m23.249s 00:27:03.849 user 0m25.007s 00:27:03.849 sys 0m9.668s 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:03.849 ************************************ 00:27:03.849 END TEST nvmf_fips 00:27:03.849 ************************************ 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:03.849 ************************************ 00:27:03.849 START TEST nvmf_control_msg_list 00:27:03.849 ************************************ 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:27:03.849 * Looking for test storage... 
00:27:03.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:03.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.849 --rc genhtml_branch_coverage=1 00:27:03.849 --rc genhtml_function_coverage=1 00:27:03.849 --rc genhtml_legend=1 00:27:03.849 --rc geninfo_all_blocks=1 00:27:03.849 --rc geninfo_unexecuted_blocks=1 00:27:03.849 00:27:03.849 ' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:03.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.849 --rc genhtml_branch_coverage=1 00:27:03.849 --rc genhtml_function_coverage=1 00:27:03.849 --rc genhtml_legend=1 00:27:03.849 --rc geninfo_all_blocks=1 00:27:03.849 --rc geninfo_unexecuted_blocks=1 00:27:03.849 00:27:03.849 ' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:03.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.849 --rc genhtml_branch_coverage=1 00:27:03.849 --rc genhtml_function_coverage=1 00:27:03.849 --rc genhtml_legend=1 00:27:03.849 --rc geninfo_all_blocks=1 00:27:03.849 --rc geninfo_unexecuted_blocks=1 00:27:03.849 00:27:03.849 ' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:03.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.849 --rc genhtml_branch_coverage=1 00:27:03.849 --rc genhtml_function_coverage=1 00:27:03.849 --rc genhtml_legend=1 00:27:03.849 --rc geninfo_all_blocks=1 00:27:03.849 --rc geninfo_unexecuted_blocks=1 00:27:03.849 00:27:03.849 ' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.849 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.850 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:27:11.985 17:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:11.985 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:11.985 17:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:11.985 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.985 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:11.985 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:11.986 Found net devices under 
0000:4b:00.1: cvl_0_1 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.986 17:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:11.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:11.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms
00:27:11.986
00:27:11.986 --- 10.0.0.2 ping statistics ---
00:27:11.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:11.986 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms
00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:11.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:11.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms
00:27:11.986
00:27:11.986 --- 10.0.0.1 ping statistics ---
00:27:11.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:11.986 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:27:11.986 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=2740004
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 2740004
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 2740004 ']'
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- #
local max_retries=100 00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.986 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:11.986 [2024-11-20 17:54:11.121980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:27:11.986 [2024-11-20 17:54:11.122052] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.986 [2024-11-20 17:54:11.212293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.986 [2024-11-20 17:54:11.258433] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.986 [2024-11-20 17:54:11.258485] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.986 [2024-11-20 17:54:11.258494] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.986 [2024-11-20 17:54:11.258501] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.986 [2024-11-20 17:54:11.258508] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:11.986 [2024-11-20 17:54:11.258537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:12.248 [2024-11-20 17:54:11.978230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.248 17:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.248 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:12.248 Malloc0 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:12.248 [2024-11-20 17:54:12.047939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2740142 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2740144 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2740145 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2740142 00:27:12.248 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:12.248 [2024-11-20 17:54:12.138963] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:27:12.248 [2024-11-20 17:54:12.139246] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:27:12.248 [2024-11-20 17:54:12.139580] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:27:13.635 Initializing NVMe Controllers
00:27:13.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:27:13.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:27:13.635 Initialization complete. Launching workers.
00:27:13.635 ========================================================
00:27:13.635 Latency(us)
00:27:13.635 Device Information : IOPS MiB/s Average min max
00:27:13.635 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1522.00 5.95 657.09 253.42 1030.01
00:27:13.635 ========================================================
00:27:13.635 Total : 1522.00 5.95 657.09 253.42 1030.01
00:27:13.635
00:27:13.635 Initializing NVMe Controllers
00:27:13.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:27:13.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:27:13.635 Initialization complete. Launching workers.
00:27:13.635 ========================================================
00:27:13.635 Latency(us)
00:27:13.635 Device Information : IOPS MiB/s Average min max
00:27:13.635 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40891.47 40582.90 41112.71
00:27:13.635 ========================================================
00:27:13.635 Total : 25.00 0.10 40891.47 40582.90 41112.71
00:27:13.635
00:27:13.635 Initializing NVMe Controllers
00:27:13.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:27:13.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:27:13.635 Initialization complete. Launching workers.
00:27:13.635 ========================================================
00:27:13.635 Latency(us)
00:27:13.635 Device Information : IOPS MiB/s Average min max
00:27:13.635 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40915.88 40789.11 41242.56
00:27:13.635 ========================================================
00:27:13.635 Total : 25.00 0.10 40915.88 40789.11 41242.56
00:27:13.635
00:27:13.635 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2740144
00:27:13.635 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2740145
00:27:13.635 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:27:13.635 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:27:13.635 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:13.636 rmmod nvme_tcp
00:27:13.636 rmmod nvme_fabrics
00:27:13.636 rmmod nvme_keyring
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 2740004 ']'
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 2740004
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 2740004 ']'
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 2740004
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2740004
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2740004'
00:27:13.636 killing process with pid 2740004
00:27:13.636 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 2740004
00:27:13.636 17:54:13
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 2740004
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:13.897 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:16.441
00:27:16.441 real 0m12.452s
00:27:16.441 user 0m7.924s
00:27:16.441 sys 0m6.610s
00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:27:16.441 ************************************
00:27:16.441 END TEST nvmf_control_msg_list
00:27:16.441 ************************************
00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:27:16.441 ************************************
00:27:16.441 START TEST nvmf_wait_for_buf
00:27:16.441 ************************************
00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:27:16.441 * Looking for test storage...
00:27:16.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.441 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:16.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.442 --rc genhtml_branch_coverage=1 00:27:16.442 --rc genhtml_function_coverage=1 00:27:16.442 --rc genhtml_legend=1 00:27:16.442 --rc geninfo_all_blocks=1 00:27:16.442 --rc geninfo_unexecuted_blocks=1 00:27:16.442 00:27:16.442 ' 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:16.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.442 --rc genhtml_branch_coverage=1 00:27:16.442 --rc genhtml_function_coverage=1 00:27:16.442 --rc genhtml_legend=1 00:27:16.442 --rc geninfo_all_blocks=1 00:27:16.442 --rc geninfo_unexecuted_blocks=1 00:27:16.442 00:27:16.442 ' 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:16.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.442 --rc genhtml_branch_coverage=1 00:27:16.442 --rc genhtml_function_coverage=1 00:27:16.442 --rc genhtml_legend=1 00:27:16.442 --rc geninfo_all_blocks=1 00:27:16.442 --rc geninfo_unexecuted_blocks=1 00:27:16.442 00:27:16.442 ' 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:16.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.442 --rc genhtml_branch_coverage=1 00:27:16.442 --rc genhtml_function_coverage=1 00:27:16.442 --rc genhtml_legend=1 00:27:16.442 --rc geninfo_all_blocks=1 00:27:16.442 --rc geninfo_unexecuted_blocks=1 00:27:16.442 00:27:16.442 ' 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.442 17:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.442 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:16.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # 
'[' -z tcp ']' 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.442 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.577 
17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:24.577 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:24.577 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:24.577 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:24.578 
17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:24.578 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:24.578 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.578 17:54:23 
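The loop traced above is how nvmf/common.sh selects test NICs: it matches supported Intel (0x8086) and Mellanox (0x15b3) device IDs against its PCI bus cache, then resolves each matched function to its kernel net device through sysfs; both ports of the E810 adapter (device 0x159b, driver ice) are found here as cvl_0_0 and cvl_0_1. A minimal sketch of the sysfs resolution step, reusing the array names from the trace:

# Sketch only -- condensed from the gather_supported_nvmf_pci_devs trace above.
for pci in "${pci_devs[@]}"; do                        # e.g. 0000:4b:00.0, 0000:4b:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs maps a PCI function to its netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keeping e.g. cvl_0_0
    net_devs+=("${pci_net_devs[@]}")                   # accumulate usable interfaces
done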
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:27:24.578 00:27:24.578 --- 10.0.0.2 ping statistics --- 00:27:24.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.578 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:27:24.578 00:27:24.578 --- 10.0.0.1 ping statistics --- 00:27:24.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.578 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=2744677 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 2744677 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 2744677 ']' 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.578 17:54:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.578 [2024-11-20 17:54:23.578989] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
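By this point nvmf_tcp_init has built SPDK's standard two-port TCP test topology: port 0000:4b:00.0 (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), port 0000:4b:00.1 (cvl_0_1) stayed in the root namespace as the initiator (10.0.0.1), an iptables rule opened TCP port 4420, and both directions were verified with ping. Running the target in its own namespace lets a single host exercise both ends of a real NVMe/TCP connection over physical ports. A condensed sketch of those steps, taken from the commands traced above:

# Sketch only -- the nvmf_tcp_init steps shown in this trace, in order.
ip netns add cvl_0_0_ns_spdk                           # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into port 4420
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator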
00:27:24.578 [2024-11-20 17:54:23.579060] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.578 [2024-11-20 17:54:23.668136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.578 [2024-11-20 17:54:23.713395] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.578 [2024-11-20 17:54:23.713448] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.578 [2024-11-20 17:54:23.713456] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.578 [2024-11-20 17:54:23.713464] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.578 [2024-11-20 17:54:23.713470] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.578 [2024-11-20 17:54:23.713494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:27:24.578 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.579 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.579 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.579 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:27:24.579 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.579 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.579 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.579 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:27:24.579 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.579 17:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 Malloc0 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 [2024-11-20 17:54:24.564914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:24.840 [2024-11-20 17:54:24.601262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.840 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.840 [2024-11-20 17:54:24.684290] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:27:26.224 Initializing NVMe Controllers
00:27:26.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:27:26.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:27:26.224 Initialization complete. Launching workers.
00:27:26.224 ========================================================
00:27:26.224 Latency(us)
00:27:26.224 Device Information : IOPS MiB/s Average min max
00:27:26.224 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.33 7999.29 63853.61
00:27:26.224 ========================================================
00:27:26.224 Total : 129.00 16.12 32294.33 7999.29 63853.61
00:27:26.224
00:27:26.224 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:27:26.224 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:27:26.224 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:26.224 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:27:26.224 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:26.224 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038
00:27:26.224 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]]
00:27:26.225 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:27:26.225 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:27:26.225 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup
00:27:26.225 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:27:26.225 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:26.225 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:27:26.225 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:26.225 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:26.225 rmmod nvme_tcp
00:27:26.486 rmmod nvme_fabrics
00:27:26.486 rmmod nvme_keyring
00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 2744677 ']'
00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 2744677
00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 2744677 ']'
00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 2744677
00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
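The retry counter read back just above is the whole point of this test: iobuf_set_options deliberately shrank the shared small-buffer pool to 154 buffers of 8192 bytes before the transport was created, so the 4-deep 128 KiB random-read load from spdk_nvme_perf is all but guaranteed to exhaust it and force the TCP transport onto its wait-for-buffer path. The test then checks that nvmf_TCP actually recorded small-pool retries (2038 here); a zero retry count would fail the check, since it would mean the path was never exercised. A sketch of that assertion, assuming a running target and the usual scripts/rpc.py entry point (the log itself goes through the rpc_cmd wrapper):

# Sketch only -- the check performed above; the rpc.py path is an assumption.
retry_count=$(scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ $retry_count -eq 0 ]]; then
    echo "small iobuf pool was never exhausted" >&2
    exit 1
fi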
common/autotest_common.sh@955 -- # uname 00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2744677 00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2744677' 00:27:26.486 killing process with pid 2744677 00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 2744677 00:27:26.486 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 2744677 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.746 17:54:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.657 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:28.657 00:27:28.657 real 0m12.729s 00:27:28.657 user 0m5.179s 00:27:28.657 sys 0m6.160s 00:27:28.657 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:28.657 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:28.657 ************************************ 00:27:28.657 END TEST nvmf_wait_for_buf 00:27:28.657 ************************************ 00:27:28.657 17:54:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:27:28.657 17:54:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:28.657 17:54:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:28.657 17:54:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:28.657 17:54:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:28.657 ************************************ 00:27:28.657 START TEST nvmf_fuzz 00:27:28.657 ************************************ 00:27:28.657 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:28.918 * Looking for test storage... 00:27:28.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:28.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.918 --rc genhtml_branch_coverage=1 00:27:28.918 --rc genhtml_function_coverage=1 00:27:28.918 --rc genhtml_legend=1 00:27:28.918 --rc geninfo_all_blocks=1 00:27:28.918 --rc geninfo_unexecuted_blocks=1 00:27:28.918 00:27:28.918 ' 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:28.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.918 --rc genhtml_branch_coverage=1 00:27:28.918 --rc genhtml_function_coverage=1 00:27:28.918 --rc genhtml_legend=1 00:27:28.918 --rc geninfo_all_blocks=1 00:27:28.918 --rc geninfo_unexecuted_blocks=1 00:27:28.918 00:27:28.918 ' 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:28.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.918 --rc genhtml_branch_coverage=1 00:27:28.918 --rc genhtml_function_coverage=1 00:27:28.918 --rc genhtml_legend=1 00:27:28.918 --rc geninfo_all_blocks=1 00:27:28.918 --rc geninfo_unexecuted_blocks=1 00:27:28.918 00:27:28.918 ' 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:28.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.918 --rc genhtml_branch_coverage=1 00:27:28.918 --rc genhtml_function_coverage=1 00:27:28.918 --rc genhtml_legend=1 00:27:28.918 --rc geninfo_all_blocks=1 00:27:28.918 --rc geninfo_unexecuted_blocks=1 00:27:28.918 00:27:28.918 ' 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.918 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:28.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:27:28.919 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:37.078 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:37.078 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:37.079 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:37.079 
17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:37.079 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:37.079 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.079 17:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.079 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:37.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:27:37.079 00:27:37.079 --- 10.0.0.2 ping statistics --- 00:27:37.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.079 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:27:37.079 00:27:37.079 --- 10.0.0.1 ping statistics --- 00:27:37.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.079 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2749323 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2749323 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2749323 ']' 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
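The fuzz target is started the same way the earlier test's target was: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace (here pinned to one core with -m 0x1), and waitforlisten polls the /var/tmp/spdk.sock RPC socket, giving up after 100 retries. A rough sketch of that pattern, with paths shortened to be relative to the SPDK tree; the loop body paraphrases the waitforlisten helper from test/common/autotest_common.sh rather than quoting it:

# Sketch only -- launch the target in the namespace, then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do            # max_retries=100, as in the trace
    # rpc_get_methods is a lightweight RPC; any successful call means the socket is up
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done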
00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 Malloc0 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.079 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:27:37.080 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:28:09.210 Fuzzing completed. 
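That completes the first fuzz pass: nvme_fuzz drove randomized admin and I/O commands at the subsystem for 30 seconds (-t 30) from a fixed seed (-S 123456), so any crash can be replayed deterministically; its opcode and command-count summary is dumped just below. A second, much shorter pass then replays the curated command list shipped in example.json (-j). The two invocations, condensed from the trace with paths shortened to be relative to the SPDK tree:

# Sketch only -- the two fuzz passes in this test, flags as traced above.
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a   # randomized, 30 s
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" \
    -j test/app/fuzz/nvme_fuzz/example.json -a                              # JSON replay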
Shutting down the fuzz application
00:28:09.210
00:28:09.210 Dumping successful admin opcodes:
00:28:09.210 8, 9, 10, 24,
00:28:09.210 Dumping successful io opcodes:
00:28:09.210 0, 9,
00:28:09.210 NS: 0x200003aeff00 I/O qp, Total commands completed: 1168734, total successful commands: 6879, random_seed: 1112124864
00:28:09.210 NS: 0x200003aeff00 admin qp, Total commands completed: 150267, total successful commands: 1205, random_seed: 116570240
00:28:09.210 17:55:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:28:09.210 Fuzzing completed. Shutting down the fuzz application
00:28:09.210
00:28:09.210 Dumping successful admin opcodes:
00:28:09.210 24,
00:28:09.210 Dumping successful io opcodes:
00:28:09.210
00:28:09.210 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 980447143
00:28:09.210 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 980522445
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:09.210 rmmod nvme_tcp
00:28:09.210 rmmod nvme_fabrics
00:28:09.210 rmmod nvme_keyring
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 2749323 ']'
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 2749323
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2749323 ']'
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 2749323
00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname
00:28:09.210 17:55:08
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2749323 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2749323' 00:28:09.210 killing process with pid 2749323 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 2749323 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 2749323 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.210 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.603 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:10.603 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:28:10.864 00:28:10.864 real 0m41.985s 00:28:10.864 user 0m55.490s 00:28:10.864 sys 0m15.586s 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:10.864 ************************************ 00:28:10.864 END TEST nvmf_fuzz 00:28:10.864 ************************************ 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:10.864 
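Before the multiconnection test starts, the fuzz pass that just ended is worth condensing. A minimal sketch of the same flow in plain shell, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (every command and flag below is lifted from the trace above):

    # build a fuzz target: TCP transport, 64 MiB malloc bdev (512 B blocks), one subsystem
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create -b Malloc0 64 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 30-second randomized pass with a fixed seed, then a replay of the canned example.json
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    nvme_fuzz -m 0x2 -F "$trid" -j example.json -a

Read as decimal, the successful admin opcodes reported above correspond to Abort (8), Set Features (9), Get Features (10) and Keep Alive (24), and the I/O opcodes to Flush (0) and Dataset Management (9). Teardown then deletes the subsystem, unloads the nvme-tcp/nvme-fabrics modules, and the iptr helper restores iptables by filtering the SPDK_NVMF-tagged rules out of iptables-save.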
************************************ 00:28:10.864 START TEST nvmf_multiconnection 00:28:10.864 ************************************ 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:28:10.864 * Looking for test storage... 00:28:10.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:28:10.864 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:11.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.127 --rc genhtml_branch_coverage=1 00:28:11.127 --rc genhtml_function_coverage=1 00:28:11.127 --rc genhtml_legend=1 00:28:11.127 --rc geninfo_all_blocks=1 00:28:11.127 --rc geninfo_unexecuted_blocks=1 00:28:11.127 00:28:11.127 ' 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:11.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.127 --rc genhtml_branch_coverage=1 00:28:11.127 --rc genhtml_function_coverage=1 00:28:11.127 --rc genhtml_legend=1 00:28:11.127 --rc geninfo_all_blocks=1 00:28:11.127 --rc geninfo_unexecuted_blocks=1 00:28:11.127 00:28:11.127 ' 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:11.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.127 --rc genhtml_branch_coverage=1 00:28:11.127 --rc genhtml_function_coverage=1 00:28:11.127 --rc genhtml_legend=1 00:28:11.127 --rc geninfo_all_blocks=1 00:28:11.127 --rc geninfo_unexecuted_blocks=1 00:28:11.127 00:28:11.127 ' 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:11.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.127 --rc genhtml_branch_coverage=1 00:28:11.127 --rc genhtml_function_coverage=1 00:28:11.127 --rc genhtml_legend=1 00:28:11.127 --rc geninfo_all_blocks=1 00:28:11.127 --rc geninfo_unexecuted_blocks=1 00:28:11.127 00:28:11.127 ' 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go trio repeated by earlier sourcings]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[rest of PATH unchanged] 00:28:11.127 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[rest of PATH unchanged] 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo [final PATH, duplicate dump elided] 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:11.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.128 17:55:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.268 17:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:19.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:19.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.268 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:19.269 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:19.269 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:28:19.269 00:28:19.269 --- 10.0.0.2 ping statistics --- 00:28:19.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.269 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:28:19.269 00:28:19.269 --- 10.0.0.1 ping statistics --- 00:28:19.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.269 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.269 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=2759483 00:28:19.270 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 2759483 00:28:19.270 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:19.270 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 2759483 ']' 00:28:19.270 17:55:18 
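The target being started here runs inside the cvl_0_0_ns_spdk namespace whose plumbing was traced a few lines up; both pings succeeding confirms it. Condensed from the trace, with the device names as detected on this host (cvl_0_0 and cvl_0_1 are the two E810 ports found under 0000:4b:00.0/.1):

    # move one port into a private namespace, address both ends, open TCP/4420
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The comment tag on the iptables rule is what lets the cleanup path strip exactly these rules later.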
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.270 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.270 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.270 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.270 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.270 [2024-11-20 17:55:18.460540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:19.270 [2024-11-20 17:55:18.460608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.270 [2024-11-20 17:55:18.535757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.270 [2024-11-20 17:55:18.586012] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.270 [2024-11-20 17:55:18.586065] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.270 [2024-11-20 17:55:18.586076] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.270 [2024-11-20 17:55:18.586085] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.270 [2024-11-20 17:55:18.586091] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
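The EAL banner above is the target coming up inside the namespace; waitforlisten then blocks until the default RPC socket answers before any rpc_cmd is issued. In outline (the polling loop below is a sketch of what the helper does, not its exact code):

    # launch nvmf_tgt in the namespace and wait for /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1    # give up if the target died during startup
        sleep 0.5
    done

With -m 0xF the app claims cores 0-3, which is why four reactors report in just below.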
00:28:19.270 [2024-11-20 17:55:18.586208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.270 [2024-11-20 17:55:18.586308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.270 [2024-11-20 17:55:18.586845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.270 [2024-11-20 17:55:18.586849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.530 [2024-11-20 17:55:19.342574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.530 Malloc1 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
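From here on, multiconnection.sh repeats the same four RPCs for each of its NVMF_SUBSYS=11 subsystems; the long trace that follows is just this loop unrolled (rpc.py standing in for the script's rpc_cmd wrapper):

    # one malloc bdev, subsystem, namespace, and TCP listener per connection
    for i in $(seq 1 11); do
        rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

-a on nvmf_create_subsystem allows any host NQN to connect and -s sets the serial number; all eleven listeners share the single 10.0.0.2:4420 endpoint.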
00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.530 [2024-11-20 17:55:19.416140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.530 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.791 Malloc2 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.791 17:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.791 Malloc3 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.791 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 Malloc4 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 Malloc5 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 Malloc6 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.792 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.053 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.053 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:28:20.053 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.053 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.053 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 Malloc7 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 Malloc8 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 Malloc9 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:28:20.054 17:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 Malloc10 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.054 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.316 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 Malloc11 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:20.316 17:55:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:21.702 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:28:21.702 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:21.702 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:21.702 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
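From this point the log switches to the host side: each nvme connect is followed by waitforserial, which sleeps and re-runs lsblk until a block device whose SERIAL column matches the subsystem's serial shows up. A sketch of that pattern follows, with the hostnqn/hostid values copied from the trace; the 2-second sleep and the bound of 15 retries mirror the counters visible in the autotest_common.sh records (@1205-@1208), though the real helper also takes an expected device count, so this is a simplification rather than the harness function itself.

#!/usr/bin/env bash
# Sketch of the connect-and-wait pattern repeated for cnode1..cnode11 below.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be

waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # lsblk prints NAME,SERIAL for every block device; a matching SERIAL
        # means the NVMe/TCP controller and its namespace are visible.
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1
}

for i in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i" || exit 1
done

Once all eleven connects complete, the namespaces surface as /dev/nvme0n1 through /dev/nvme10n1, which is exactly the filename list in the fio job file generated further down.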
00:28:21.702 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:24.245 17:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:24.245 17:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:24.245 17:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:28:24.245 17:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:24.245 17:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:24.245 17:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:24.245 17:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:24.245 17:55:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:28:25.632 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:28:25.632 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:25.632 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:25.632 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:25.632 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:27.544 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:27.544 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:27.544 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:28:27.544 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:27.544 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:27.544 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:27.544 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:27.544 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:28:28.926 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:28:28.926 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:28.926 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:28:28.926 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:28.926 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:31.571 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:31.571 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:31.571 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:28:31.571 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:31.571 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:31.571 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:31.571 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:31.571 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:28:32.954 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:28:32.954 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:32.954 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:32.954 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:32.954 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:34.866 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:34.867 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:34.867 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:28:34.867 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:34.867 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:34.867 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:34.867 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:34.867 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:28:36.779 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:28:36.779 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:28:36.779 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:36.779 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:36.779 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:38.692 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:38.692 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:38.692 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:28:38.692 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:38.692 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:38.692 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:38.692 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:38.692 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:28:40.075 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:28:40.075 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:40.075 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:40.075 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:40.075 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:42.620 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:42.620 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:42.620 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:28:42.620 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:42.620 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:42.620 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:42.620 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:42.620 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:28:44.004 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:28:44.004 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:44.005 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:44.005 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:44.005 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:45.917 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:45.917 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:45.917 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:28:45.917 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:45.917 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:45.917 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:45.917 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:45.917 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:28:47.827 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:28:47.827 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:47.827 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:47.827 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:47.828 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:49.739 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:49.739 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:49.739 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:28:49.739 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:49.739 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:49.739 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:49.739 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:49.740 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:28:51.653 17:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:28:51.653 17:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:51.653 17:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:51.653 17:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:51.653 17:55:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:53.562 17:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:53.562 17:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:53.562 17:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:28:53.562 17:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:53.562 17:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:53.562 17:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:53.562 17:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:53.562 17:55:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:28:55.475 17:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:28:55.475 17:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:55.475 17:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:55.475 17:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:55.475 17:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:57.391 17:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:57.391 17:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:57.391 17:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:28:57.391 17:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:57.391 17:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:57.392 17:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:57.392 17:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:57.392 17:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:28:59.952 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:28:59.952 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:59.952 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:59.952 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:59.952 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:29:01.338 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:01.338 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:01.338 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:29:01.598 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:01.598 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:01.598 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:29:01.598 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:29:01.598 [global] 00:29:01.598 thread=1 00:29:01.598 invalidate=1 00:29:01.598 rw=read 00:29:01.598 time_based=1 00:29:01.598 runtime=10 00:29:01.598 ioengine=libaio 00:29:01.598 direct=1 00:29:01.598 bs=262144 00:29:01.598 iodepth=64 00:29:01.598 norandommap=1 00:29:01.598 numjobs=1 00:29:01.598 00:29:01.598 [job0] 00:29:01.598 filename=/dev/nvme0n1 00:29:01.598 [job1] 00:29:01.598 filename=/dev/nvme10n1 00:29:01.598 [job2] 00:29:01.598 filename=/dev/nvme1n1 00:29:01.598 [job3] 00:29:01.598 filename=/dev/nvme2n1 00:29:01.598 [job4] 00:29:01.598 filename=/dev/nvme3n1 00:29:01.598 [job5] 00:29:01.598 filename=/dev/nvme4n1 00:29:01.598 [job6] 00:29:01.598 filename=/dev/nvme5n1 00:29:01.598 [job7] 00:29:01.598 filename=/dev/nvme6n1 00:29:01.598 [job8] 00:29:01.598 filename=/dev/nvme7n1 00:29:01.598 [job9] 00:29:01.598 filename=/dev/nvme8n1 00:29:01.598 [job10] 00:29:01.598 filename=/dev/nvme9n1 00:29:01.599 Could not set queue depth (nvme0n1) 00:29:01.599 Could not set queue depth (nvme10n1) 00:29:01.599 Could not set queue depth (nvme1n1) 00:29:01.599 Could not set queue depth (nvme2n1) 00:29:01.599 Could not set queue depth (nvme3n1) 00:29:01.599 Could not set queue depth (nvme4n1) 00:29:01.599 Could not set queue depth (nvme5n1) 00:29:01.599 Could not set queue depth (nvme6n1) 00:29:01.599 Could not set queue depth (nvme7n1) 00:29:01.599 Could not set queue depth (nvme8n1) 00:29:01.599 Could not set queue depth (nvme9n1) 00:29:02.184 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:02.184 fio-3.35 00:29:02.184 Starting 11 threads 00:29:14.417 00:29:14.417 job0: (groupid=0, jobs=1): err= 0: pid=2767952: Wed Nov 20 17:56:12 2024 00:29:14.417 read: IOPS=200, BW=50.0MiB/s (52.4MB/s)(506MiB/10121msec) 00:29:14.417 slat (usec): min=6, max=305052, avg=3775.26, stdev=17701.01 00:29:14.417 clat (msec): min=9, max=971, avg=315.54, stdev=216.49 00:29:14.417 lat (msec): min=9, max=971, avg=319.32, stdev=219.05 00:29:14.417 clat percentiles (msec): 00:29:14.417 | 1.00th=[ 23], 5.00th=[ 30], 10.00th=[ 45], 20.00th=[ 93], 00:29:14.417 | 30.00th=[ 140], 40.00th=[ 230], 50.00th=[ 300], 60.00th=[ 388], 00:29:14.417 | 70.00th=[ 443], 80.00th=[ 502], 90.00th=[ 600], 95.00th=[ 718], 00:29:14.417 | 99.00th=[ 869], 99.50th=[ 894], 99.90th=[ 969], 99.95th=[ 969], 00:29:14.417 | 99.99th=[ 969] 00:29:14.417 bw ( KiB/s): min= 7680, max=215552, per=5.71%, avg=50227.20, stdev=45051.07, samples=20 00:29:14.417 iops : min= 30, max= 842, avg=196.20, stdev=175.98, samples=20 00:29:14.417 lat (msec) : 10=0.10%, 20=0.40%, 50=10.57%, 100=11.70%, 250=19.60% 00:29:14.417 lat (msec) : 500=37.23%, 750=18.02%, 1000=2.37% 00:29:14.417 cpu : usr=0.09%, sys=0.72%, ctx=430, majf=0, minf=3535 00:29:14.417 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:29:14.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.417 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.417 issued rwts: total=2025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.417 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.417 job1: (groupid=0, jobs=1): err= 0: pid=2767974: Wed Nov 20 17:56:12 2024 00:29:14.417 read: IOPS=322, BW=80.5MiB/s (84.4MB/s)(814MiB/10110msec) 00:29:14.417 slat (usec): min=12, max=130020, avg=2525.77, stdev=10382.51 00:29:14.417 clat (msec): min=12, max=606, avg=196.04, stdev=130.72 00:29:14.417 lat (msec): min=12, max=606, avg=198.57, stdev=132.36 00:29:14.417 clat percentiles (msec): 00:29:14.417 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 79], 00:29:14.417 | 30.00th=[ 118], 40.00th=[ 136], 50.00th=[ 159], 60.00th=[ 197], 00:29:14.417 | 70.00th=[ 259], 80.00th=[ 326], 90.00th=[ 384], 95.00th=[ 447], 00:29:14.417 | 99.00th=[ 531], 99.50th=[ 567], 99.90th=[ 609], 99.95th=[ 609], 00:29:14.417 | 99.99th=[ 609] 00:29:14.417 bw ( KiB/s): min=36864, 
max=285696, per=9.29%, avg=81715.20, stdev=58769.98, samples=20 00:29:14.417 iops : min= 144, max= 1116, avg=319.20, stdev=229.57, samples=20 00:29:14.417 lat (msec) : 20=0.09%, 50=16.22%, 100=7.37%, 250=44.23%, 500=30.28% 00:29:14.417 lat (msec) : 750=1.81% 00:29:14.417 cpu : usr=0.09%, sys=1.13%, ctx=598, majf=0, minf=4097 00:29:14.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:29:14.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.417 issued rwts: total=3256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.417 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.417 job2: (groupid=0, jobs=1): err= 0: pid=2767994: Wed Nov 20 17:56:12 2024 00:29:14.417 read: IOPS=474, BW=119MiB/s (124MB/s)(1194MiB/10063msec) 00:29:14.417 slat (usec): min=10, max=97274, avg=2087.50, stdev=7458.62 00:29:14.417 clat (msec): min=17, max=405, avg=132.60, stdev=104.04 00:29:14.417 lat (msec): min=17, max=405, avg=134.69, stdev=105.60 00:29:14.417 clat percentiles (msec): 00:29:14.417 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 38], 00:29:14.417 | 30.00th=[ 42], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 159], 00:29:14.417 | 70.00th=[ 205], 80.00th=[ 249], 90.00th=[ 292], 95.00th=[ 313], 00:29:14.417 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 405], 99.95th=[ 405], 00:29:14.417 | 99.99th=[ 405] 00:29:14.417 bw ( KiB/s): min=48640, max=428544, per=13.71%, avg=120627.20, stdev=114484.92, samples=20 00:29:14.417 iops : min= 190, max= 1674, avg=471.20, stdev=447.21, samples=20 00:29:14.417 lat (msec) : 20=0.15%, 50=36.00%, 100=19.85%, 250=24.13%, 500=19.87% 00:29:14.417 cpu : usr=0.14%, sys=1.82%, ctx=854, majf=0, minf=4097 00:29:14.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:14.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.417 issued rwts: total=4775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.417 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.417 job3: (groupid=0, jobs=1): err= 0: pid=2768006: Wed Nov 20 17:56:12 2024 00:29:14.417 read: IOPS=204, BW=51.0MiB/s (53.5MB/s)(516MiB/10120msec) 00:29:14.417 slat (usec): min=12, max=356266, avg=4235.18, stdev=18564.40 00:29:14.417 clat (usec): min=1334, max=903683, avg=308840.98, stdev=198408.01 00:29:14.417 lat (usec): min=1384, max=903757, avg=313076.16, stdev=200860.75 00:29:14.417 clat percentiles (usec): 00:29:14.417 | 1.00th=[ 1811], 5.00th=[ 3458], 10.00th=[ 16188], 20.00th=[106431], 00:29:14.417 | 30.00th=[170918], 40.00th=[274727], 50.00th=[329253], 60.00th=[371196], 00:29:14.417 | 70.00th=[408945], 80.00th=[459277], 90.00th=[541066], 95.00th=[700449], 00:29:14.417 | 99.00th=[784335], 99.50th=[809501], 99.90th=[834667], 99.95th=[834667], 00:29:14.417 | 99.99th=[901776] 00:29:14.417 bw ( KiB/s): min=14336, max=143360, per=5.82%, avg=51229.45, stdev=30895.50, samples=20 00:29:14.417 iops : min= 56, max= 560, avg=200.10, stdev=120.69, samples=20 00:29:14.417 lat (msec) : 2=1.55%, 4=4.26%, 10=0.73%, 20=6.30%, 50=0.44% 00:29:14.417 lat (msec) : 100=4.94%, 250=18.31%, 500=48.23%, 750=12.69%, 1000=2.57% 00:29:14.417 cpu : usr=0.05%, sys=0.80%, ctx=517, majf=0, minf=4097 00:29:14.417 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=96.9% 00:29:14.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.417 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.417 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.417 job4: (groupid=0, jobs=1): err= 0: pid=2768013: Wed Nov 20 17:56:12 2024 00:29:14.417 read: IOPS=206, BW=51.7MiB/s (54.2MB/s)(523MiB/10115msec) 00:29:14.417 slat (usec): min=12, max=200097, avg=4435.99, stdev=15671.48 00:29:14.417 clat (msec): min=18, max=866, avg=304.28, stdev=165.96 00:29:14.417 lat (msec): min=20, max=866, avg=308.71, stdev=168.23 00:29:14.417 clat percentiles (msec): 00:29:14.417 | 1.00th=[ 28], 5.00th=[ 84], 10.00th=[ 96], 20.00th=[ 129], 00:29:14.417 | 30.00th=[ 213], 40.00th=[ 249], 50.00th=[ 284], 60.00th=[ 342], 00:29:14.417 | 70.00th=[ 376], 80.00th=[ 443], 90.00th=[ 523], 95.00th=[ 609], 00:29:14.417 | 99.00th=[ 751], 99.50th=[ 852], 99.90th=[ 852], 99.95th=[ 869], 00:29:14.417 | 99.99th=[ 869] 00:29:14.417 bw ( KiB/s): min=16896, max=120832, per=5.91%, avg=51968.00, stdev=25497.12, samples=20 00:29:14.417 iops : min= 66, max= 472, avg=203.00, stdev=99.60, samples=20 00:29:14.417 lat (msec) : 20=0.05%, 50=1.91%, 100=9.94%, 250=28.19%, 500=47.44% 00:29:14.417 lat (msec) : 750=11.23%, 1000=1.24% 00:29:14.417 cpu : usr=0.08%, sys=0.92%, ctx=365, majf=0, minf=4097 00:29:14.417 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:29:14.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.417 issued rwts: total=2093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.417 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.417 job5: (groupid=0, jobs=1): err= 0: pid=2768038: Wed Nov 20 17:56:12 2024 00:29:14.417 read: IOPS=393, BW=98.3MiB/s (103MB/s)(995MiB/10115msec) 00:29:14.417 slat (usec): min=11, max=332886, avg=2017.59, stdev=12302.48 00:29:14.417 clat (msec): min=11, max=814, avg=160.52, stdev=184.79 00:29:14.417 lat (msec): min=12, max=887, avg=162.54, stdev=186.78 00:29:14.417 clat percentiles (msec): 00:29:14.417 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:29:14.417 | 30.00th=[ 42], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 91], 00:29:14.417 | 70.00th=[ 186], 80.00th=[ 296], 90.00th=[ 468], 95.00th=[ 584], 00:29:14.417 | 99.00th=[ 726], 99.50th=[ 760], 99.90th=[ 818], 99.95th=[ 818], 00:29:14.417 | 99.99th=[ 818] 00:29:14.417 bw ( KiB/s): min=13824, max=406528, per=11.39%, avg=100235.70, stdev=117715.72, samples=20 00:29:14.417 iops : min= 54, max= 1588, avg=391.50, stdev=459.82, samples=20 00:29:14.417 lat (msec) : 20=0.53%, 50=54.63%, 100=5.20%, 250=14.93%, 500=16.39% 00:29:14.417 lat (msec) : 750=7.49%, 1000=0.83% 00:29:14.417 cpu : usr=0.09%, sys=1.25%, ctx=544, majf=0, minf=4097 00:29:14.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:14.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.417 issued rwts: total=3978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.417 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.417 job6: (groupid=0, jobs=1): err= 0: pid=2768044: Wed Nov 20 17:56:12 2024 00:29:14.417 read: IOPS=271, BW=67.8MiB/s (71.1MB/s)(686MiB/10113msec) 00:29:14.417 slat (usec): min=12, max=293904, avg=2624.28, stdev=11738.32 00:29:14.417 clat 
(usec): min=1653, max=812006, avg=232851.89, stdev=200111.95 00:29:14.417 lat (usec): min=1698, max=824521, avg=235476.17, stdev=202067.95 00:29:14.417 clat percentiles (msec): 00:29:14.417 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 15], 20.00th=[ 44], 00:29:14.417 | 30.00th=[ 85], 40.00th=[ 129], 50.00th=[ 169], 60.00th=[ 234], 00:29:14.417 | 70.00th=[ 321], 80.00th=[ 443], 90.00th=[ 550], 95.00th=[ 600], 00:29:14.417 | 99.00th=[ 726], 99.50th=[ 751], 99.90th=[ 793], 99.95th=[ 810], 00:29:14.417 | 99.99th=[ 810] 00:29:14.417 bw ( KiB/s): min=17920, max=251904, per=7.80%, avg=68633.60, stdev=53706.33, samples=20 00:29:14.417 iops : min= 70, max= 984, avg=268.10, stdev=209.79, samples=20 00:29:14.417 lat (msec) : 2=0.15%, 4=0.95%, 10=2.26%, 20=9.77%, 50=8.71% 00:29:14.417 lat (msec) : 100=11.26%, 250=32.03%, 500=22.05%, 750=12.32%, 1000=0.51% 00:29:14.417 cpu : usr=0.17%, sys=0.99%, ctx=867, majf=0, minf=4097 00:29:14.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:29:14.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.418 issued rwts: total=2744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.418 job7: (groupid=0, jobs=1): err= 0: pid=2768049: Wed Nov 20 17:56:12 2024 00:29:14.418 read: IOPS=237, BW=59.4MiB/s (62.3MB/s)(601MiB/10114msec) 00:29:14.418 slat (usec): min=12, max=151459, avg=3664.93, stdev=12418.70 00:29:14.418 clat (msec): min=11, max=758, avg=265.42, stdev=127.41 00:29:14.418 lat (msec): min=11, max=837, avg=269.09, stdev=129.19 00:29:14.418 clat percentiles (msec): 00:29:14.418 | 1.00th=[ 14], 5.00th=[ 92], 10.00th=[ 140], 20.00th=[ 171], 00:29:14.418 | 30.00th=[ 201], 40.00th=[ 228], 50.00th=[ 266], 60.00th=[ 284], 00:29:14.418 | 70.00th=[ 296], 80.00th=[ 313], 90.00th=[ 451], 95.00th=[ 558], 00:29:14.418 | 99.00th=[ 634], 99.50th=[ 718], 99.90th=[ 760], 99.95th=[ 760], 00:29:14.418 | 99.99th=[ 760] 00:29:14.418 bw ( KiB/s): min=18432, max=139264, per=6.81%, avg=59878.40, stdev=26909.29, samples=20 00:29:14.418 iops : min= 72, max= 544, avg=233.90, stdev=105.11, samples=20 00:29:14.418 lat (msec) : 20=2.00%, 50=1.21%, 100=2.50%, 250=40.12%, 500=46.36% 00:29:14.418 lat (msec) : 750=7.70%, 1000=0.12% 00:29:14.418 cpu : usr=0.10%, sys=0.82%, ctx=473, majf=0, minf=4097 00:29:14.418 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:29:14.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.418 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.418 job8: (groupid=0, jobs=1): err= 0: pid=2768078: Wed Nov 20 17:56:12 2024 00:29:14.418 read: IOPS=470, BW=118MiB/s (123MB/s)(1182MiB/10044msec) 00:29:14.418 slat (usec): min=10, max=91045, avg=1722.46, stdev=5947.29 00:29:14.418 clat (msec): min=2, max=509, avg=134.07, stdev=91.35 00:29:14.418 lat (msec): min=2, max=530, avg=135.79, stdev=92.33 00:29:14.418 clat percentiles (msec): 00:29:14.418 | 1.00th=[ 16], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 55], 00:29:14.418 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 120], 60.00th=[ 153], 00:29:14.418 | 70.00th=[ 186], 80.00th=[ 209], 90.00th=[ 251], 95.00th=[ 300], 00:29:14.418 | 99.00th=[ 422], 99.50th=[ 443], 99.90th=[ 498], 99.95th=[ 498], 
00:29:14.418 | 99.99th=[ 510] 00:29:14.418 bw ( KiB/s): min=52736, max=289792, per=13.57%, avg=119398.40, stdev=76956.24, samples=20 00:29:14.418 iops : min= 206, max= 1132, avg=466.40, stdev=300.61, samples=20 00:29:14.418 lat (msec) : 4=0.44%, 10=0.25%, 20=1.10%, 50=6.41%, 100=39.48% 00:29:14.418 lat (msec) : 250=42.18%, 500=10.09%, 750=0.04% 00:29:14.418 cpu : usr=0.15%, sys=1.80%, ctx=930, majf=0, minf=4097 00:29:14.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:14.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.418 issued rwts: total=4727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.418 job9: (groupid=0, jobs=1): err= 0: pid=2768091: Wed Nov 20 17:56:12 2024 00:29:14.418 read: IOPS=275, BW=68.8MiB/s (72.1MB/s)(695MiB/10112msec) 00:29:14.418 slat (usec): min=12, max=496214, avg=2445.76, stdev=14971.36 00:29:14.418 clat (msec): min=2, max=906, avg=229.88, stdev=185.92 00:29:14.418 lat (msec): min=2, max=906, avg=232.32, stdev=187.20 00:29:14.418 clat percentiles (msec): 00:29:14.418 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 53], 00:29:14.418 | 30.00th=[ 86], 40.00th=[ 167], 50.00th=[ 203], 60.00th=[ 236], 00:29:14.418 | 70.00th=[ 296], 80.00th=[ 372], 90.00th=[ 477], 95.00th=[ 558], 00:29:14.418 | 99.00th=[ 844], 99.50th=[ 869], 99.90th=[ 902], 99.95th=[ 911], 00:29:14.418 | 99.99th=[ 911] 00:29:14.418 bw ( KiB/s): min=22060, max=250880, per=7.91%, avg=69557.40, stdev=53821.76, samples=20 00:29:14.418 iops : min= 86, max= 980, avg=271.70, stdev=210.25, samples=20 00:29:14.418 lat (msec) : 4=3.34%, 10=1.26%, 20=3.78%, 50=11.00%, 100=12.19% 00:29:14.418 lat (msec) : 250=30.96%, 500=28.62%, 750=6.11%, 1000=2.73% 00:29:14.418 cpu : usr=0.17%, sys=1.04%, ctx=915, majf=0, minf=4097 00:29:14.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:29:14.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.418 issued rwts: total=2781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.418 job10: (groupid=0, jobs=1): err= 0: pid=2768102: Wed Nov 20 17:56:12 2024 00:29:14.418 read: IOPS=391, BW=97.8MiB/s (103MB/s)(984MiB/10061msec) 00:29:14.418 slat (usec): min=8, max=425951, avg=2283.18, stdev=10976.11 00:29:14.418 clat (msec): min=8, max=877, avg=161.06, stdev=110.19 00:29:14.418 lat (msec): min=8, max=892, avg=163.34, stdev=111.85 00:29:14.418 clat percentiles (msec): 00:29:14.418 | 1.00th=[ 24], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 51], 00:29:14.418 | 30.00th=[ 63], 40.00th=[ 121], 50.00th=[ 161], 60.00th=[ 186], 00:29:14.418 | 70.00th=[ 220], 80.00th=[ 243], 90.00th=[ 275], 95.00th=[ 309], 00:29:14.418 | 99.00th=[ 535], 99.50th=[ 575], 99.90th=[ 659], 99.95th=[ 877], 00:29:14.418 | 99.99th=[ 877] 00:29:14.418 bw ( KiB/s): min=31232, max=341504, per=11.27%, avg=99128.80, stdev=70709.51, samples=20 00:29:14.418 iops : min= 122, max= 1334, avg=387.20, stdev=276.22, samples=20 00:29:14.418 lat (msec) : 10=0.03%, 20=0.64%, 50=18.30%, 100=17.81%, 250=45.64% 00:29:14.418 lat (msec) : 500=15.73%, 750=1.80%, 1000=0.05% 00:29:14.418 cpu : usr=0.15%, sys=1.22%, ctx=729, majf=0, minf=4097 00:29:14.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 
32=0.8%, >=64=98.4% 00:29:14.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:14.418 issued rwts: total=3935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:14.418 00:29:14.418 Run status group 0 (all jobs): 00:29:14.418 READ: bw=859MiB/s (901MB/s), 50.0MiB/s-119MiB/s (52.4MB/s-124MB/s), io=8696MiB (9118MB), run=10044-10121msec 00:29:14.418 00:29:14.418 Disk stats (read/write): 00:29:14.418 nvme0n1: ios=4007/0, merge=0/0, ticks=1250979/0, in_queue=1250979, util=96.39% 00:29:14.418 nvme10n1: ios=6417/0, merge=0/0, ticks=1241695/0, in_queue=1241695, util=96.50% 00:29:14.418 nvme1n1: ios=9318/0, merge=0/0, ticks=1214925/0, in_queue=1214925, util=96.90% 00:29:14.418 nvme2n1: ios=4031/0, merge=0/0, ticks=1238267/0, in_queue=1238267, util=97.19% 00:29:14.418 nvme3n1: ios=4118/0, merge=0/0, ticks=1244748/0, in_queue=1244748, util=97.26% 00:29:14.418 nvme4n1: ios=7874/0, merge=0/0, ticks=1243821/0, in_queue=1243821, util=97.71% 00:29:14.418 nvme5n1: ios=5457/0, merge=0/0, ticks=1252334/0, in_queue=1252334, util=98.02% 00:29:14.418 nvme6n1: ios=4720/0, merge=0/0, ticks=1241868/0, in_queue=1241868, util=98.07% 00:29:14.418 nvme7n1: ios=8974/0, merge=0/0, ticks=1227385/0, in_queue=1227385, util=98.56% 00:29:14.418 nvme8n1: ios=5465/0, merge=0/0, ticks=1243832/0, in_queue=1243832, util=98.81% 00:29:14.418 nvme9n1: ios=7604/0, merge=0/0, ticks=1220990/0, in_queue=1220990, util=99.09% 00:29:14.418 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:29:14.418 [global] 00:29:14.418 thread=1 00:29:14.418 invalidate=1 00:29:14.418 rw=randwrite 00:29:14.418 time_based=1 00:29:14.418 runtime=10 00:29:14.418 ioengine=libaio 00:29:14.418 direct=1 00:29:14.418 bs=262144 00:29:14.418 iodepth=64 00:29:14.418 norandommap=1 00:29:14.418 numjobs=1 00:29:14.418 00:29:14.418 [job0] 00:29:14.418 filename=/dev/nvme0n1 00:29:14.418 [job1] 00:29:14.418 filename=/dev/nvme10n1 00:29:14.418 [job2] 00:29:14.418 filename=/dev/nvme1n1 00:29:14.418 [job3] 00:29:14.418 filename=/dev/nvme2n1 00:29:14.418 [job4] 00:29:14.418 filename=/dev/nvme3n1 00:29:14.418 [job5] 00:29:14.418 filename=/dev/nvme4n1 00:29:14.418 [job6] 00:29:14.418 filename=/dev/nvme5n1 00:29:14.418 [job7] 00:29:14.418 filename=/dev/nvme6n1 00:29:14.418 [job8] 00:29:14.418 filename=/dev/nvme7n1 00:29:14.418 [job9] 00:29:14.418 filename=/dev/nvme8n1 00:29:14.418 [job10] 00:29:14.418 filename=/dev/nvme9n1 00:29:14.418 Could not set queue depth (nvme0n1) 00:29:14.418 Could not set queue depth (nvme10n1) 00:29:14.418 Could not set queue depth (nvme1n1) 00:29:14.418 Could not set queue depth (nvme2n1) 00:29:14.418 Could not set queue depth (nvme3n1) 00:29:14.418 Could not set queue depth (nvme4n1) 00:29:14.418 Could not set queue depth (nvme5n1) 00:29:14.418 Could not set queue depth (nvme6n1) 00:29:14.418 Could not set queue depth (nvme7n1) 00:29:14.418 Could not set queue depth (nvme8n1) 00:29:14.418 Could not set queue depth (nvme9n1) 00:29:14.418 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job2: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:14.418 fio-3.35 00:29:14.418 Starting 11 threads 00:29:24.421 00:29:24.421 job0: (groupid=0, jobs=1): err= 0: pid=2769467: Wed Nov 20 17:56:23 2024 00:29:24.421 write: IOPS=542, BW=136MiB/s (142MB/s)(1370MiB/10100msec); 0 zone resets 00:29:24.421 slat (usec): min=20, max=94337, avg=1430.28, stdev=3642.47 00:29:24.421 clat (msec): min=2, max=362, avg=116.48, stdev=58.94 00:29:24.421 lat (msec): min=2, max=362, avg=117.91, stdev=59.45 00:29:24.421 clat percentiles (msec): 00:29:24.421 | 1.00th=[ 16], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 59], 00:29:24.421 | 30.00th=[ 89], 40.00th=[ 104], 50.00th=[ 111], 60.00th=[ 125], 00:29:24.421 | 70.00th=[ 144], 80.00th=[ 150], 90.00th=[ 182], 95.00th=[ 222], 00:29:24.421 | 99.00th=[ 309], 99.50th=[ 321], 99.90th=[ 347], 99.95th=[ 355], 00:29:24.421 | 99.99th=[ 363] 00:29:24.421 bw ( KiB/s): min=61440, max=241152, per=10.50%, avg=138664.45, stdev=54312.38, samples=20 00:29:24.421 iops : min= 240, max= 942, avg=541.65, stdev=212.16, samples=20 00:29:24.421 lat (msec) : 4=0.07%, 10=0.26%, 20=1.35%, 50=13.98%, 100=20.19% 00:29:24.421 lat (msec) : 250=60.32%, 500=3.83% 00:29:24.421 cpu : usr=1.31%, sys=1.75%, ctx=2224, majf=0, minf=1 00:29:24.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:29:24.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.421 issued rwts: total=0,5479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.421 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.421 job1: (groupid=0, jobs=1): err= 0: pid=2769502: Wed Nov 20 17:56:23 2024 00:29:24.421 write: IOPS=379, BW=94.8MiB/s (99.4MB/s)(957MiB/10100msec); 0 zone resets 00:29:24.421 slat (usec): min=24, max=144809, avg=2548.77, stdev=5375.88 00:29:24.421 clat (msec): min=3, max=340, avg=166.20, stdev=55.79 00:29:24.421 lat (msec): min=3, max=340, avg=168.75, stdev=56.50 00:29:24.421 clat percentiles (msec): 00:29:24.421 | 1.00th=[ 39], 5.00th=[ 75], 10.00th=[ 97], 20.00th=[ 136], 00:29:24.421 | 30.00th=[ 144], 40.00th=[ 148], 50.00th=[ 155], 60.00th=[ 167], 00:29:24.421 | 70.00th=[ 184], 80.00th=[ 218], 90.00th=[ 253], 95.00th=[ 268], 00:29:24.421 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 326], 99.95th=[ 342], 00:29:24.421 | 99.99th=[ 342] 00:29:24.421 bw ( KiB/s): min=49053, max=166912, per=7.30%, avg=96404.65, stdev=27001.16, 
samples=20 00:29:24.421 iops : min= 191, max= 652, avg=376.55, stdev=105.53, samples=20 00:29:24.421 lat (msec) : 4=0.03%, 10=0.10%, 20=0.26%, 50=2.12%, 100=8.54% 00:29:24.421 lat (msec) : 250=78.09%, 500=10.86% 00:29:24.421 cpu : usr=0.97%, sys=1.25%, ctx=1066, majf=0, minf=1 00:29:24.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:24.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.421 issued rwts: total=0,3829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.421 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.421 job2: (groupid=0, jobs=1): err= 0: pid=2769522: Wed Nov 20 17:56:23 2024 00:29:24.421 write: IOPS=450, BW=113MiB/s (118MB/s)(1137MiB/10095msec); 0 zone resets 00:29:24.421 slat (usec): min=16, max=36908, avg=2040.00, stdev=4469.44 00:29:24.421 clat (msec): min=4, max=394, avg=139.93, stdev=76.28 00:29:24.421 lat (msec): min=4, max=394, avg=141.97, stdev=77.38 00:29:24.421 clat percentiles (msec): 00:29:24.421 | 1.00th=[ 18], 5.00th=[ 45], 10.00th=[ 60], 20.00th=[ 91], 00:29:24.421 | 30.00th=[ 102], 40.00th=[ 109], 50.00th=[ 122], 60.00th=[ 130], 00:29:24.421 | 70.00th=[ 134], 80.00th=[ 205], 90.00th=[ 264], 95.00th=[ 300], 00:29:24.421 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 393], 99.95th=[ 393], 00:29:24.421 | 99.99th=[ 393] 00:29:24.421 bw ( KiB/s): min=47104, max=222720, per=8.70%, avg=114841.60, stdev=47676.98, samples=20 00:29:24.421 iops : min= 184, max= 870, avg=448.60, stdev=186.24, samples=20 00:29:24.421 lat (msec) : 10=0.37%, 20=0.66%, 50=6.00%, 100=20.69%, 250=59.55% 00:29:24.421 lat (msec) : 500=12.73% 00:29:24.421 cpu : usr=1.08%, sys=1.28%, ctx=1485, majf=0, minf=1 00:29:24.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:24.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.421 issued rwts: total=0,4549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.421 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.421 job3: (groupid=0, jobs=1): err= 0: pid=2769534: Wed Nov 20 17:56:23 2024 00:29:24.421 write: IOPS=458, BW=115MiB/s (120MB/s)(1157MiB/10103msec); 0 zone resets 00:29:24.421 slat (usec): min=25, max=81979, avg=1911.13, stdev=4173.75 00:29:24.421 clat (msec): min=5, max=368, avg=137.72, stdev=56.81 00:29:24.421 lat (msec): min=5, max=371, avg=139.63, stdev=57.63 00:29:24.421 clat percentiles (msec): 00:29:24.421 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 63], 20.00th=[ 90], 00:29:24.421 | 30.00th=[ 108], 40.00th=[ 136], 50.00th=[ 144], 60.00th=[ 146], 00:29:24.421 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 220], 95.00th=[ 236], 00:29:24.421 | 99.00th=[ 279], 99.50th=[ 317], 99.90th=[ 351], 99.95th=[ 355], 00:29:24.421 | 99.99th=[ 368] 00:29:24.421 bw ( KiB/s): min=67584, max=180224, per=8.85%, avg=116889.60, stdev=31468.59, samples=20 00:29:24.421 iops : min= 264, max= 704, avg=456.60, stdev=122.92, samples=20 00:29:24.421 lat (msec) : 10=0.17%, 20=1.06%, 50=6.20%, 100=18.41%, 250=72.22% 00:29:24.421 lat (msec) : 500=1.94% 00:29:24.421 cpu : usr=1.09%, sys=1.44%, ctx=1786, majf=0, minf=1 00:29:24.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:29:24.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:29:24.421 issued rwts: total=0,4629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.421 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.421 job4: (groupid=0, jobs=1): err= 0: pid=2769540: Wed Nov 20 17:56:23 2024 00:29:24.421 write: IOPS=413, BW=103MiB/s (108MB/s)(1044MiB/10092msec); 0 zone resets 00:29:24.421 slat (usec): min=26, max=35086, avg=2121.44, stdev=4448.35 00:29:24.421 clat (msec): min=15, max=376, avg=152.47, stdev=63.45 00:29:24.421 lat (msec): min=15, max=376, avg=154.59, stdev=64.14 00:29:24.421 clat percentiles (msec): 00:29:24.421 | 1.00th=[ 41], 5.00th=[ 79], 10.00th=[ 89], 20.00th=[ 103], 00:29:24.421 | 30.00th=[ 122], 40.00th=[ 129], 50.00th=[ 132], 60.00th=[ 142], 00:29:24.421 | 70.00th=[ 161], 80.00th=[ 211], 90.00th=[ 255], 95.00th=[ 275], 00:29:24.421 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 376], 99.95th=[ 376], 00:29:24.421 | 99.99th=[ 376] 00:29:24.421 bw ( KiB/s): min=56320, max=169984, per=7.97%, avg=105318.40, stdev=32115.22, samples=20 00:29:24.421 iops : min= 220, max= 664, avg=411.40, stdev=125.45, samples=20 00:29:24.421 lat (msec) : 20=0.10%, 50=1.99%, 100=16.40%, 250=70.46%, 500=11.06% 00:29:24.421 cpu : usr=0.78%, sys=1.40%, ctx=1416, majf=0, minf=1 00:29:24.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:29:24.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.421 issued rwts: total=0,4177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.421 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.421 job5: (groupid=0, jobs=1): err= 0: pid=2769564: Wed Nov 20 17:56:23 2024 00:29:24.421 write: IOPS=275, BW=68.8MiB/s (72.2MB/s)(697MiB/10129msec); 0 zone resets 00:29:24.421 slat (usec): min=33, max=139565, avg=3323.62, stdev=6882.05 00:29:24.421 clat (msec): min=22, max=379, avg=229.01, stdev=60.26 00:29:24.421 lat (msec): min=22, max=379, avg=232.33, stdev=60.87 00:29:24.421 clat percentiles (msec): 00:29:24.421 | 1.00th=[ 45], 5.00th=[ 125], 10.00th=[ 150], 20.00th=[ 178], 00:29:24.421 | 30.00th=[ 215], 40.00th=[ 230], 50.00th=[ 241], 60.00th=[ 247], 00:29:24.421 | 70.00th=[ 253], 80.00th=[ 268], 90.00th=[ 296], 95.00th=[ 326], 00:29:24.421 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 380], 99.95th=[ 380], 00:29:24.421 | 99.99th=[ 380] 00:29:24.421 bw ( KiB/s): min=47104, max=106496, per=5.28%, avg=69785.60, stdev=13682.15, samples=20 00:29:24.421 iops : min= 184, max= 416, avg=272.60, stdev=53.45, samples=20 00:29:24.421 lat (msec) : 50=1.36%, 100=1.65%, 250=64.25%, 500=32.74% 00:29:24.421 cpu : usr=0.61%, sys=0.81%, ctx=884, majf=0, minf=1 00:29:24.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:29:24.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.421 issued rwts: total=0,2789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.421 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.422 job6: (groupid=0, jobs=1): err= 0: pid=2769576: Wed Nov 20 17:56:23 2024 00:29:24.422 write: IOPS=321, BW=80.4MiB/s (84.3MB/s)(815MiB/10130msec); 0 zone resets 00:29:24.422 slat (usec): min=23, max=40218, avg=2791.55, stdev=5672.67 00:29:24.422 clat (msec): min=13, max=378, avg=196.07, stdev=76.43 00:29:24.422 lat (msec): min=15, max=383, avg=198.86, stdev=77.34 00:29:24.422 clat percentiles (msec): 00:29:24.422 | 
1.00th=[ 24], 5.00th=[ 67], 10.00th=[ 86], 20.00th=[ 109], 00:29:24.422 | 30.00th=[ 153], 40.00th=[ 201], 50.00th=[ 220], 60.00th=[ 230], 00:29:24.422 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 279], 95.00th=[ 313], 00:29:24.422 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 372], 99.95th=[ 376], 00:29:24.422 | 99.99th=[ 380] 00:29:24.422 bw ( KiB/s): min=51200, max=162304, per=6.19%, avg=81792.00, stdev=29205.44, samples=20 00:29:24.422 iops : min= 200, max= 634, avg=319.50, stdev=114.08, samples=20 00:29:24.422 lat (msec) : 20=0.43%, 50=2.64%, 100=13.01%, 250=64.71%, 500=19.21% 00:29:24.422 cpu : usr=0.67%, sys=1.02%, ctx=1067, majf=0, minf=1 00:29:24.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:29:24.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.422 issued rwts: total=0,3259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.422 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.422 job7: (groupid=0, jobs=1): err= 0: pid=2769581: Wed Nov 20 17:56:23 2024 00:29:24.422 write: IOPS=1153, BW=288MiB/s (302MB/s)(2893MiB/10032msec); 0 zone resets 00:29:24.422 slat (usec): min=10, max=174149, avg=859.14, stdev=2971.55 00:29:24.422 clat (msec): min=2, max=308, avg=54.59, stdev=27.60 00:29:24.422 lat (msec): min=2, max=309, avg=55.45, stdev=27.86 00:29:24.422 clat percentiles (msec): 00:29:24.422 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 42], 00:29:24.422 | 30.00th=[ 44], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 51], 00:29:24.422 | 70.00th=[ 56], 80.00th=[ 60], 90.00th=[ 72], 95.00th=[ 95], 00:29:24.422 | 99.00th=[ 199], 99.50th=[ 271], 99.90th=[ 309], 99.95th=[ 309], 00:29:24.422 | 99.99th=[ 309] 00:29:24.422 bw ( KiB/s): min=114688, max=407040, per=22.31%, avg=294656.00, stdev=85672.94, samples=20 00:29:24.422 iops : min= 448, max= 1590, avg=1151.00, stdev=334.66, samples=20 00:29:24.422 lat (msec) : 4=0.03%, 10=0.10%, 20=0.24%, 50=58.16%, 100=36.99% 00:29:24.422 lat (msec) : 250=3.77%, 500=0.71% 00:29:24.422 cpu : usr=2.42%, sys=3.53%, ctx=2717, majf=0, minf=1 00:29:24.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:29:24.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.422 issued rwts: total=0,11573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.422 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.422 job8: (groupid=0, jobs=1): err= 0: pid=2769582: Wed Nov 20 17:56:23 2024 00:29:24.422 write: IOPS=460, BW=115MiB/s (121MB/s)(1167MiB/10128msec); 0 zone resets 00:29:24.422 slat (usec): min=21, max=51948, avg=1976.02, stdev=4418.47 00:29:24.422 clat (usec): min=1242, max=374355, avg=136827.78, stdev=76890.40 00:29:24.422 lat (usec): min=1302, max=374393, avg=138803.80, stdev=77952.54 00:29:24.422 clat percentiles (msec): 00:29:24.422 | 1.00th=[ 10], 5.00th=[ 46], 10.00th=[ 63], 20.00th=[ 69], 00:29:24.422 | 30.00th=[ 79], 40.00th=[ 94], 50.00th=[ 111], 60.00th=[ 144], 00:29:24.422 | 70.00th=[ 180], 80.00th=[ 230], 90.00th=[ 245], 95.00th=[ 249], 00:29:24.422 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:29:24.422 | 99.99th=[ 376] 00:29:24.422 bw ( KiB/s): min=57344, max=231936, per=8.92%, avg=117862.40, stdev=55357.74, samples=20 00:29:24.422 iops : min= 224, max= 906, avg=460.40, stdev=216.24, samples=20 00:29:24.422 lat (msec) : 
2=0.17%, 4=0.11%, 10=0.73%, 20=1.03%, 50=3.79% 00:29:24.422 lat (msec) : 100=36.28%, 250=53.61%, 500=4.29% 00:29:24.422 cpu : usr=1.07%, sys=1.39%, ctx=1578, majf=0, minf=1 00:29:24.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:24.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.422 issued rwts: total=0,4667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.422 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.422 job9: (groupid=0, jobs=1): err= 0: pid=2769583: Wed Nov 20 17:56:23 2024 00:29:24.422 write: IOPS=351, BW=87.9MiB/s (92.1MB/s)(890MiB/10130msec); 0 zone resets 00:29:24.422 slat (usec): min=24, max=247958, avg=2738.80, stdev=7728.96 00:29:24.422 clat (msec): min=8, max=571, avg=179.26, stdev=85.84 00:29:24.422 lat (msec): min=10, max=571, avg=182.00, stdev=86.81 00:29:24.422 clat percentiles (msec): 00:29:24.422 | 1.00th=[ 42], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 81], 00:29:24.422 | 30.00th=[ 136], 40.00th=[ 146], 50.00th=[ 190], 60.00th=[ 224], 00:29:24.422 | 70.00th=[ 234], 80.00th=[ 245], 90.00th=[ 257], 95.00th=[ 309], 00:29:24.422 | 99.00th=[ 401], 99.50th=[ 514], 99.90th=[ 575], 99.95th=[ 575], 00:29:24.422 | 99.99th=[ 575] 00:29:24.422 bw ( KiB/s): min=53248, max=245248, per=6.78%, avg=89523.20, stdev=45851.06, samples=20 00:29:24.422 iops : min= 208, max= 958, avg=349.70, stdev=179.11, samples=20 00:29:24.422 lat (msec) : 10=0.03%, 20=0.48%, 50=0.90%, 100=21.40%, 250=64.83% 00:29:24.422 lat (msec) : 500=11.80%, 750=0.56% 00:29:24.422 cpu : usr=0.72%, sys=1.05%, ctx=885, majf=0, minf=1 00:29:24.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:29:24.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.422 issued rwts: total=0,3560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.422 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.422 job10: (groupid=0, jobs=1): err= 0: pid=2769584: Wed Nov 20 17:56:23 2024 00:29:24.422 write: IOPS=371, BW=92.8MiB/s (97.3MB/s)(937MiB/10093msec); 0 zone resets 00:29:24.422 slat (usec): min=23, max=95142, avg=2371.91, stdev=5348.32 00:29:24.422 clat (msec): min=8, max=416, avg=169.92, stdev=74.81 00:29:24.422 lat (msec): min=8, max=416, avg=172.29, stdev=75.69 00:29:24.422 clat percentiles (msec): 00:29:24.422 | 1.00th=[ 29], 5.00th=[ 80], 10.00th=[ 90], 20.00th=[ 122], 00:29:24.422 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 142], 60.00th=[ 165], 00:29:24.422 | 70.00th=[ 197], 80.00th=[ 243], 90.00th=[ 279], 95.00th=[ 313], 00:29:24.422 | 99.00th=[ 376], 99.50th=[ 397], 99.90th=[ 418], 99.95th=[ 418], 00:29:24.422 | 99.99th=[ 418] 00:29:24.422 bw ( KiB/s): min=40960, max=156672, per=7.14%, avg=94336.00, stdev=29611.26, samples=20 00:29:24.422 iops : min= 160, max= 612, avg=368.50, stdev=115.67, samples=20 00:29:24.422 lat (msec) : 10=0.05%, 20=0.40%, 50=1.73%, 100=11.77%, 250=67.98% 00:29:24.422 lat (msec) : 500=18.06% 00:29:24.422 cpu : usr=0.79%, sys=1.14%, ctx=1291, majf=0, minf=1 00:29:24.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:29:24.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:24.422 issued rwts: total=0,3748,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:24.422 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:24.422 00:29:24.422 Run status group 0 (all jobs): 00:29:24.422 WRITE: bw=1290MiB/s (1352MB/s), 68.8MiB/s-288MiB/s (72.2MB/s-302MB/s), io=12.8GiB (13.7GB), run=10032-10130msec 00:29:24.422 00:29:24.422 Disk stats (read/write): 00:29:24.422 nvme0n1: ios=46/10933, merge=0/0, ticks=3087/1230721, in_queue=1233808, util=99.99% 00:29:24.422 nvme10n1: ios=41/7630, merge=0/0, ticks=2818/1227829, in_queue=1230647, util=100.00% 00:29:24.422 nvme1n1: ios=20/9085, merge=0/0, ticks=852/1231969, in_queue=1232821, util=98.38% 00:29:24.422 nvme2n1: ios=0/9229, merge=0/0, ticks=0/1233379, in_queue=1233379, util=97.22% 00:29:24.422 nvme3n1: ios=0/8342, merge=0/0, ticks=0/1234377, in_queue=1234377, util=97.33% 00:29:24.422 nvme4n1: ios=0/5516, merge=0/0, ticks=0/1226081, in_queue=1226081, util=97.77% 00:29:24.422 nvme5n1: ios=0/6451, merge=0/0, ticks=0/1225933, in_queue=1225933, util=97.98% 00:29:24.422 nvme6n1: ios=44/22508, merge=0/0, ticks=2729/1148119, in_queue=1150848, util=100.00% 00:29:24.422 nvme7n1: ios=42/9273, merge=0/0, ticks=531/1224874, in_queue=1225405, util=100.00% 00:29:24.422 nvme8n1: ios=39/7057, merge=0/0, ticks=2256/1189930, in_queue=1192186, util=100.00% 00:29:24.422 nvme9n1: ios=0/7482, merge=0/0, ticks=0/1233526, in_queue=1233526, util=99.11% 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:24.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.422 17:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:24.422 17:56:23 
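
The fio summary above reads as follows: per job, slat is submission latency, clat is completion latency, and lat is their sum; the bw and iops lines give min/max/mean/stdev over the run's 20 samples; "IO depths" shows how the 64-deep queue was actually filled; and the closing "Run status group" line aggregates all eleven write jobs (68.8-288MiB/s each, 1290MiB/s combined), with "Disk stats" reporting the kernel block-layer io/merge/tick counters per namespace. A minimal sketch for pulling the per-job average IOPS back out of a saved copy of this output (the file name fio.log is an assumption, not something the harness produces):

    # Print each job's avg= value from the "iops :" lines (illustrative only).
    awk '/iops[[:space:]]*:/ {
        for (i = 1; i <= NF; i++)
            if ($i ~ /^avg=/) { gsub(/avg=|,/, "", $i); print $i }
    }' fio.log
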
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:29:24.422 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:29:24.422 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:24.423 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:29:24.684 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:29:24.684 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:29:24.684 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:24.684 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:24.684 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:29:24.684 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:24.684 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:29:24.946 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:24.946 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:24.946 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.946 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:24.946 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.946 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:24.946 17:56:24 
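
The teardown blocks above all follow one pattern: nvme disconnect -n <nqn> drops the initiator-side controller, waitforserial_disconnect polls lsblk until the SPDK<i> serial number stops appearing, and rpc_cmd nvmf_delete_subsystem removes the subsystem on the target side. A condensed sketch of the wait step as the trace shows it (the retry budget and sleep interval are assumptions; the real helper lives in autotest_common.sh):

    waitforserial_disconnect() {
        local serial=$1 i=0
        # Poll until no block device advertises the given serial any more.
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # assumed retry budget
            sleep 1
        done
    }

Polling lsblk rather than trusting the disconnect command's exit status matters here: the nvme CLI can return before the kernel has finished tearing the namespace's block device down.
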
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:29:25.207 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:25.207 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:29:25.207 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:29:25.207 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:29:25.207 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:25.207 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:25.207 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:29:25.207 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:29:25.207 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:25.468 17:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:29:25.468 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:25.468 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:29:25.729 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.729 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:25.729 17:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:29:25.989 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:29:25.989 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:29:25.989 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:25.989 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:25.989 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:29:25.989 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:25.989 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:29:25.989 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:25.989 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:29:25.990 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.990 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:25.990 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.990 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:25.990 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:29:26.250 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.250 17:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:26.250 17:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:29:26.250 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:26.250 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:29:26.511 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 
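
Unrolled, the eleven blocks above are a single loop over seq 1 $NVMF_SUBSYS with NVMF_SUBSYS=11, as the multiconnection.sh@37-40 markers show. A compact sketch of the same loop (scripts/rpc.py stands in for the harness's rpc_cmd wrapper and is an assumption):

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        waitforserial_disconnect "SPDK${i}"   # sketched earlier
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
    rm -f ./local-job0-0-verify.state          # fio's verify-state leftover
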
00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.511 rmmod nvme_tcp 00:29:26.511 rmmod nvme_fabrics 00:29:26.511 rmmod nvme_keyring 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 2759483 ']' 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 2759483 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 2759483 ']' 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 2759483 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2759483 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2759483' 00:29:26.511 killing process with pid 2759483 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 2759483 00:29:26.511 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 2759483 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
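
nvmftestfini above then unwinds the initiator stack and the target process: modprobe -v -r on nvme-tcp also removes its dependents (the three rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), and killprocess stops the nvmf_tgt reactor only after confirming the pid is alive and its comm is still a reactor rather than a recycled pid. A condensed sketch of that check, faithful to the trace but simplified:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0              # already gone
        # ps comm= showed reactor_0 above; refuse a pid recycled by sudo:
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                          # reap our own child
    }
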
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.772 17:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.314 00:29:29.314 real 1m18.129s 00:29:29.314 user 5m0.145s 00:29:29.314 sys 0m17.060s 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:29.314 ************************************ 00:29:29.314 END TEST nvmf_multiconnection 00:29:29.314 ************************************ 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:29.314 ************************************ 00:29:29.314 START TEST nvmf_initiator_timeout 00:29:29.314 ************************************ 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:29:29.314 * Looking for test storage... 
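
The iptr cleanup above works because every rule the harness inserts carries an -m comment tag beginning SPDK_NVMF (visible where ipts opens port 4420 later in this log), so restoring a filtered dump removes exactly those rules:

    # Drop only the SPDK-tagged rules; every other rule survives the round trip.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
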
00:29:29.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.314 --rc genhtml_branch_coverage=1 00:29:29.314 --rc genhtml_function_coverage=1 00:29:29.314 --rc genhtml_legend=1 00:29:29.314 --rc geninfo_all_blocks=1 00:29:29.314 --rc geninfo_unexecuted_blocks=1 00:29:29.314 00:29:29.314 ' 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.314 --rc genhtml_branch_coverage=1 00:29:29.314 --rc genhtml_function_coverage=1 00:29:29.314 --rc genhtml_legend=1 00:29:29.314 --rc geninfo_all_blocks=1 00:29:29.314 --rc geninfo_unexecuted_blocks=1 00:29:29.314 00:29:29.314 ' 00:29:29.314 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.314 --rc genhtml_branch_coverage=1 00:29:29.314 --rc genhtml_function_coverage=1 00:29:29.314 --rc genhtml_legend=1 00:29:29.315 --rc geninfo_all_blocks=1 00:29:29.315 --rc geninfo_unexecuted_blocks=1 00:29:29.315 00:29:29.315 ' 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:29.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.315 --rc genhtml_branch_coverage=1 00:29:29.315 --rc genhtml_function_coverage=1 00:29:29.315 --rc genhtml_legend=1 00:29:29.315 --rc geninfo_all_blocks=1 00:29:29.315 --rc geninfo_unexecuted_blocks=1 00:29:29.315 00:29:29.315 ' 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
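
The lcov probe above exercises the harness's component-wise version comparison: both strings are split on ., - and :, then compared numerically field by field, with missing fields treated as 0 (so 1.15 < 2 holds). A compact re-derivation, not the verbatim scripts/common.sh helper:

    lt() {   # return 0 if version $1 < version $2 (numeric fields assumed)
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
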
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.315 17:56:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.315 17:56:29 
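
The PATH printed above is the same golangci/protoc/go prefix triple prepended once per sourcing of paths/export.sh; this is harmless, since lookup stops at the first hit, but noisy. If the duplication ever needed trimming, a one-pass dedup is short (illustrative only, not part of the harness):

    # Collapse duplicate PATH entries, keeping first-seen order.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}   # drop the trailing separator ORS leaves behind
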
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.315 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.449 17:56:35 
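
The "[: : integer expression expected" complaint above is common.sh line 33 handing '[' an empty string where -eq needs an integer; the test simply fails and the branch is skipped, so the run is unharmed. Defaulting the expansion is the quiet form (FLAG and enable_feature are stand-in names; the log does not show which variable was empty):

    [ "${FLAG:-0}" -eq 1 ] && enable_feature   # empty FLAG now reads as 0
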
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.449 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.449 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.449 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.449 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:37.450 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:37.450 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:37.450 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 
1 == 0 )) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:37.450 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:29:37.450 00:29:37.450 --- 10.0.0.2 ping statistics --- 00:29:37.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.450 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:29:37.450 00:29:37.450 --- 10.0.0.1 ping statistics --- 00:29:37.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.450 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=2775793 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 2775793 00:29:37.450 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:29:37.451 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 2775793 ']' 00:29:37.451 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.451 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.451 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.451 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.451 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 [2024-11-20 17:56:36.424186] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:37.451 [2024-11-20 17:56:36.424258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.451 [2024-11-20 17:56:36.511761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.451 [2024-11-20 17:56:36.557789] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.451 [2024-11-20 17:56:36.557841] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.451 [2024-11-20 17:56:36.557850] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.451 [2024-11-20 17:56:36.557857] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.451 [2024-11-20 17:56:36.557863] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
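For anyone replaying the bring-up above by hand, it reduces to a short network-namespace recipe: the target's port gets a private stack so the two ends of the physical loopback between the E810 ports can address each other. A minimal sketch using this rig's values (the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addresses come straight from the log; the nvmf_tgt path is shortened from the workspace path logged above):

# target port moves into a private namespace; initiator port stays in the root namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on the listener port, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target then runs inside the namespace, as nvmfappstart does above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The sub-millisecond ping RTTs logged above (0.610 ms and 0.284 ms) are the quick sanity check that the two ports really are wired back-to-back before the target comes up.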
00:29:37.451 [2024-11-20 17:56:36.558011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.451 [2024-11-20 17:56:36.558184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.451 [2024-11-20 17:56:36.558350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.451 [2024-11-20 17:56:36.558440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 Malloc0 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 Delay0 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 [2024-11-20 17:56:37.303129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.451 17:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:37.451 [2024-11-20 17:56:37.343412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.451 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:39.365 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:29:39.365 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:29:39.365 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:39.365 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:39.365 17:56:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2776656 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:29:41.278 17:56:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:29:41.278 [global] 00:29:41.278 thread=1 00:29:41.278 invalidate=1 00:29:41.278 rw=write 00:29:41.278 time_based=1 00:29:41.278 runtime=60 00:29:41.278 ioengine=libaio 00:29:41.278 direct=1 00:29:41.278 bs=4096 00:29:41.278 iodepth=1 00:29:41.278 norandommap=0 00:29:41.278 numjobs=1 00:29:41.278 00:29:41.278 verify_dump=1 00:29:41.278 verify_backlog=512 00:29:41.278 verify_state_save=0 00:29:41.278 do_verify=1 00:29:41.278 verify=crc32c-intel 00:29:41.278 [job0] 00:29:41.278 filename=/dev/nvme0n1 00:29:41.278 Could not set queue depth (nvme0n1) 00:29:41.537 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:41.537 fio-3.35 00:29:41.537 Starting 1 thread 00:29:44.188 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:44.189 true 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:44.189 true 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:44.189 true 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:44.189 true 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.189 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.485 17:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:47.485 true 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:47.485 true 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:47.485 true 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:47.485 true 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:29:47.485 17:56:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2776656 00:30:43.747 00:30:43.747 job0: (groupid=0, jobs=1): err= 0: pid=2776971: Wed Nov 20 17:57:41 2024 00:30:43.747 read: IOPS=10, BW=42.4KiB/s (43.5kB/s)(2548KiB/60031msec) 00:30:43.747 slat (usec): min=7, max=2606, avg=31.06, stdev=102.26 00:30:43.747 clat (usec): min=356, max=41993k, avg=93184.72, stdev=1662846.84 00:30:43.747 lat (usec): min=382, max=41993k, avg=93215.78, stdev=1662846.72 00:30:43.747 clat percentiles (usec): 00:30:43.747 | 1.00th=[ 486], 5.00th=[ 578], 10.00th=[ 594], 00:30:43.747 | 20.00th=[ 644], 30.00th=[ 791], 40.00th=[ 41681], 00:30:43.747 | 50.00th=[ 41681], 60.00th=[ 42206], 70.00th=[ 42206], 00:30:43.747 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42206], 00:30:43.747 | 99.00th=[ 43254], 99.50th=[ 43254], 99.90th=[17112761], 00:30:43.747 | 99.95th=[17112761], 99.99th=[17112761] 00:30:43.747 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60031msec); 0 zone resets 00:30:43.747 slat (usec): min=9, max=29866, avg=59.58, stdev=932.43 00:30:43.747 clat (usec): min=210, max=918, avg=562.27, stdev=98.91 00:30:43.747 lat (usec): min=246, max=30586, avg=621.84, stdev=943.05 00:30:43.747 clat percentiles (usec): 00:30:43.747 | 1.00th=[ 330], 5.00th=[ 392], 10.00th=[ 437], 20.00th=[ 478], 00:30:43.747 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 586], 00:30:43.747 | 70.00th=[ 619], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 717], 00:30:43.747 | 99.00th=[ 783], 99.50th=[ 832], 
99.90th=[ 889], 99.95th=[ 922], 00:30:43.747 | 99.99th=[ 922] 00:30:43.747 bw ( KiB/s): min= 1064, max= 4096, per=100.00%, avg=2730.67, stdev=1538.30, samples=3 00:30:43.747 iops : min= 266, max= 1024, avg=682.67, stdev=384.57, samples=3 00:30:43.747 lat (usec) : 250=0.06%, 500=15.23%, 750=54.73%, 1000=5.06% 00:30:43.747 lat (msec) : 2=0.18%, 50=24.68%, >=2000=0.06% 00:30:43.747 cpu : usr=0.05%, sys=0.11%, ctx=1664, majf=0, minf=1 00:30:43.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.747 issued rwts: total=637,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:43.747 00:30:43.747 Run status group 0 (all jobs): 00:30:43.747 READ: bw=42.4KiB/s (43.5kB/s), 42.4KiB/s-42.4KiB/s (43.5kB/s-43.5kB/s), io=2548KiB (2609kB), run=60031-60031msec 00:30:43.747 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60031-60031msec 00:30:43.747 00:30:43.747 Disk stats (read/write): 00:30:43.747 nvme0n1: ios=685/1024, merge=0/0, ticks=18763/452, in_queue=19215, util=99.82% 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:43.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:30:43.747 nvmf hotplug test: fio successful as expected 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.747 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:30:43.747 17:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.748 rmmod nvme_tcp 00:30:43.748 rmmod nvme_fabrics 00:30:43.748 rmmod nvme_keyring 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 2775793 ']' 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 2775793 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 2775793 ']' 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 2775793 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2775793 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2775793' 00:30:43.748 killing process with pid 2775793 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 2775793 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 2775793 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:30:43.748 17:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.748 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.318 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.318 00:30:44.318 real 1m15.205s 00:30:44.318 user 4m35.903s 00:30:44.318 sys 0m7.503s 00:30:44.318 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:44.318 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:44.318 ************************************ 00:30:44.318 END TEST nvmf_initiator_timeout 00:30:44.318 ************************************ 00:30:44.318 17:57:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:30:44.318 17:57:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:30:44.318 17:57:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:30:44.318 17:57:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.318 17:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.461 17:57:51 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:52.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:52.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ 
tcp == rdma ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:52.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:52.461 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:52.461 ************************************ 00:30:52.461 START TEST nvmf_perf_adq 00:30:52.461 ************************************ 00:30:52.461 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:52.462 * Looking for test storage... 
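Stepping back, the nvmf_initiator_timeout run that just finished compresses into a handful of RPCs: expose a malloc bdev behind a delay bdev, let fio write through it over NVMe/TCP, raise the delay past the host's default 30-second NVMe I/O timeout so the initiator has to recover, then lower it again and confirm fio still exits cleanly ("fio successful as expected" above). A condensed replay, with rpc.py shown where the harness used its rpc_cmd wrapper; the lone p99_write value of 310000000 is reproduced exactly as logged:

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # plus the --hostnqn/--hostid flags logged above

# with fio's 60 s write job running against /dev/nvme0n1, stall I/O past the
# initiator timeout (latencies are in microseconds), then put them back
for lat in avg_read avg_write p99_read; do
    ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
done
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
    ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
done

The multi-second clat tail in the fio summary above is those stalled writes finally completing once the delay drops back; the pass condition is simply that fio ends with err=0.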
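The NIC discovery that ran just above (and runs again after the ice driver reload further down) boils down to a vendor:device table plus a sysfs walk. A rough standalone equivalent, trimmed to the 0x8086:0x159b E810 entry this rig matches twice; the full common.sh table also carries 0x1592, the x722 0x37d2 and several Mellanox IDs, and its link check is the [[ up == up ]] test visible in the trace:

intel=0x8086
e810=() net_devs=()
for pci in /sys/bus/pci/devices/*; do
    # keep only functions whose vendor/device IDs are in the supported table
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == 0x159b ]] && e810+=("$pci")
done
for pci in "${e810[@]}"; do
    [[ -d $pci/net ]] || continue            # function has no bound netdev
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        [[ $(<"$net/operstate") == up ]] && net_devs+=("${net##*/}")
    done
done
echo "usable interfaces: ${net_devs[*]}"     # -> cvl_0_0 cvl_0_1, matching the log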
00:30:52.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-:
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-:
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<'
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:30:52.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:52.462 --rc genhtml_branch_coverage=1
00:30:52.462 --rc genhtml_function_coverage=1
00:30:52.462 --rc genhtml_legend=1
00:30:52.462 --rc geninfo_all_blocks=1
00:30:52.462 --rc geninfo_unexecuted_blocks=1
00:30:52.462 
00:30:52.462 '
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:30:52.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:52.462 --rc genhtml_branch_coverage=1
00:30:52.462 --rc genhtml_function_coverage=1
00:30:52.462 --rc genhtml_legend=1
00:30:52.462 --rc geninfo_all_blocks=1
00:30:52.462 --rc geninfo_unexecuted_blocks=1
00:30:52.462 
00:30:52.462 '
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:30:52.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:52.462 --rc genhtml_branch_coverage=1
00:30:52.462 --rc genhtml_function_coverage=1
00:30:52.462 --rc genhtml_legend=1
00:30:52.462 --rc geninfo_all_blocks=1
00:30:52.462 --rc geninfo_unexecuted_blocks=1
00:30:52.462 
00:30:52.462 '
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:30:52.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:52.462 --rc genhtml_branch_coverage=1
00:30:52.462 --rc genhtml_function_coverage=1
00:30:52.462 --rc genhtml_legend=1
00:30:52.462 --rc geninfo_all_blocks=1
00:30:52.462 --rc geninfo_unexecuted_blocks=1
00:30:52.462 
00:30:52.462 '
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s
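The lt 1.15 2 / cmp_versions trace above is a plain dotted-version walk: both strings are split on IFS=.-:, each component is validated as a number (the decimal calls), and the first differing component decides. A compact re-sketch of just the '<' path exercised here, with non-numeric components left out for brevity:

lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing components compare as 0, so 1.15 vs 2 walks (1,15) against (2,0)
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # all components equal, so not strictly less-than
}
lt 1.15 2 && echo "pre-2.0 lcov: pass the --rc coverage flags"   # true here, as in the trace

Here 1 < 2 settles it on the first component, which is why the trace returns 0 and the --rc lcov_branch_coverage/--rc lcov_function_coverage options get exported into LCOV_OPTS above.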
00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.462 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.463 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:52.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:52.463 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.463 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.463 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.463 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:30:52.463 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.463 17:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:59.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:59.052 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:59.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:59.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:59.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:30:59.053 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:31:00.436 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:31:02.979 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.267 17:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.267 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:08.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:08.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:08.268 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:08.268 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:31:08.268 00:31:08.268 --- 10.0.0.2 ping statistics --- 00:31:08.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.268 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:31:08.268 00:31:08.268 --- 10.0.0.1 ping statistics --- 00:31:08.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.268 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=2798291 00:31:08.268 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 
2798291 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2798291 ']' 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.269 [2024-11-20 17:58:07.711788] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:08.269 [2024-11-20 17:58:07.711854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.269 [2024-11-20 17:58:07.779087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.269 [2024-11-20 17:58:07.824168] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.269 [2024-11-20 17:58:07.824219] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.269 [2024-11-20 17:58:07.824225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.269 [2024-11-20 17:58:07.824231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.269 [2024-11-20 17:58:07.824235] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
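
The target bring-up that follows (perf_adq.sh@42 through @49) is an ordinary SPDK RPC sequence and can be replayed by hand. A minimal sketch, assuming the scripts/rpc.py helper from the SPDK tree and the default /var/tmp/spdk.sock socket; prefix each call with ip netns exec cvl_0_0_ns_spdk when the target runs inside the namespace, as it does here:

    rpc=./scripts/rpc.py
    # Socket options must be set before framework_start_init releases --wait-for-rpc
    $rpc sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM-backed namespace, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The only knob that differs between the two runs in this log is the placement-id value (0 here, 1 for the busy-poll run below) together with the matching --sock-priority.
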
00:31:08.269 [2024-11-20 17:58:07.826183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.269 [2024-11-20 17:58:07.826395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.269 [2024-11-20 17:58:07.826552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.269 [2024-11-20 17:58:07.826552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.269 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.269 [2024-11-20 17:58:08.124714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.269 Malloc1 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.269 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.529 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.529 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.530 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.530 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:08.530 [2024-11-20 17:58:08.190427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.530 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.530 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2798360 00:31:08.530 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:31:08.530 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:31:10.443 "tick_rate": 2400000000, 00:31:10.443 "poll_groups": [ 00:31:10.443 { 00:31:10.443 "name": "nvmf_tgt_poll_group_000", 00:31:10.443 "admin_qpairs": 1, 00:31:10.443 "io_qpairs": 1, 00:31:10.443 "current_admin_qpairs": 1, 00:31:10.443 "current_io_qpairs": 1, 00:31:10.443 "pending_bdev_io": 0, 00:31:10.443 
"completed_nvme_io": 16781, 00:31:10.443 "transports": [ 00:31:10.443 { 00:31:10.443 "trtype": "TCP" 00:31:10.443 } 00:31:10.443 ] 00:31:10.443 }, 00:31:10.443 { 00:31:10.443 "name": "nvmf_tgt_poll_group_001", 00:31:10.443 "admin_qpairs": 0, 00:31:10.443 "io_qpairs": 1, 00:31:10.443 "current_admin_qpairs": 0, 00:31:10.443 "current_io_qpairs": 1, 00:31:10.443 "pending_bdev_io": 0, 00:31:10.443 "completed_nvme_io": 20166, 00:31:10.443 "transports": [ 00:31:10.443 { 00:31:10.443 "trtype": "TCP" 00:31:10.443 } 00:31:10.443 ] 00:31:10.443 }, 00:31:10.443 { 00:31:10.443 "name": "nvmf_tgt_poll_group_002", 00:31:10.443 "admin_qpairs": 0, 00:31:10.443 "io_qpairs": 1, 00:31:10.443 "current_admin_qpairs": 0, 00:31:10.443 "current_io_qpairs": 1, 00:31:10.443 "pending_bdev_io": 0, 00:31:10.443 "completed_nvme_io": 17404, 00:31:10.443 "transports": [ 00:31:10.443 { 00:31:10.443 "trtype": "TCP" 00:31:10.443 } 00:31:10.443 ] 00:31:10.443 }, 00:31:10.443 { 00:31:10.443 "name": "nvmf_tgt_poll_group_003", 00:31:10.443 "admin_qpairs": 0, 00:31:10.443 "io_qpairs": 1, 00:31:10.443 "current_admin_qpairs": 0, 00:31:10.443 "current_io_qpairs": 1, 00:31:10.443 "pending_bdev_io": 0, 00:31:10.443 "completed_nvme_io": 19066, 00:31:10.443 "transports": [ 00:31:10.443 { 00:31:10.443 "trtype": "TCP" 00:31:10.443 } 00:31:10.443 ] 00:31:10.443 } 00:31:10.443 ] 00:31:10.443 }' 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:31:10.443 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2798360 00:31:18.583 Initializing NVMe Controllers 00:31:18.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:18.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:18.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:18.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:31:18.583 Initialization complete. Launching workers. 
00:31:18.583 ======================================================== 00:31:18.583 Latency(us) 00:31:18.583 Device Information : IOPS MiB/s Average min max 00:31:18.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13264.33 51.81 4825.16 1224.82 11411.37 00:31:18.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13582.92 53.06 4711.42 915.66 11502.82 00:31:18.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12414.43 48.49 5154.97 1330.36 13468.84 00:31:18.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12930.63 50.51 4949.36 1134.55 11839.27 00:31:18.583 ======================================================== 00:31:18.583 Total : 52192.31 203.88 4904.78 915.66 13468.84 00:31:18.583 00:31:18.583 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:31:18.583 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:18.583 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:31:18.583 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.584 rmmod nvme_tcp 00:31:18.584 rmmod nvme_fabrics 00:31:18.584 rmmod nvme_keyring 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 2798291 ']' 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 2798291 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2798291 ']' 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2798291 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2798291 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2798291' 00:31:18.584 killing process with pid 2798291 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2798291 00:31:18.584 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2798291 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:18.844 
17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.844 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.755 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.755 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:31:20.755 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:31:20.755 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:31:22.665 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:31:24.578 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:31:29.868 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:31:29.868 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:29.868 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.868 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:29.868 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:29.869 17:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # 
(( 2 == 0 )) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:29.869 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:29.869 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:29.869 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:29.869 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.869 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:29.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:29.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:31:29.870 00:31:29.870 --- 10.0.0.2 ping statistics --- 00:31:29.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.870 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:29.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:29.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:31:29.870 00:31:29.870 --- 10.0.0.1 ping statistics --- 00:31:29.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.870 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:31:29.870 net.core.busy_poll = 1 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:31:29.870 net.core.busy_read = 1 00:31:29.870 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:31:29.870 17:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=2802893 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 2802893 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2802893 ']' 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:30.131 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:30.131 [2024-11-20 17:58:29.972407] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:30.131 [2024-11-20 17:58:29.972476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.392 [2024-11-20 17:58:30.063928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:30.392 [2024-11-20 17:58:30.116371] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.392 [2024-11-20 17:58:30.116426] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
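
Condensed from the adq_configure_driver trace just above (perf_adq.sh@22 through @35), the ADQ side of the setup is a handful of ethtool/sysctl/tc commands on the target port. A sketch of the same steps; the interface name, the 2@0 2@2 queue split, and the listener address/port are taken from this run, and every command here executes inside the cvl_0_0_ns_spdk namespace:

    dev=cvl_0_0
    ethtool --offload $dev hw-tc-offload on                  # let the NIC steer packets by traffic class
    ethtool --set-priv-flags $dev channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1      # enable socket busy polling
    # Two traffic classes: TC0 (default) on queues 0-1, TC1 (NVMe/TCP) on queues 2-3
    tc qdisc add dev $dev root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev $dev ingress
    # Pin NVMe/TCP traffic for the listener into hardware TC1
    tc filter add dev $dev protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs step at perf_adq.sh@38 is not a standard tool but an SPDK helper under scripts/perf/nvmf/, which aligns the transmit-queue (XPS) mapping with the receive queues configured above.
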
00:31:30.392 [2024-11-20 17:58:30.116435] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.392 [2024-11-20 17:58:30.116443] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.392 [2024-11-20 17:58:30.116449] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.392 [2024-11-20 17:58:30.116645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.392 [2024-11-20 17:58:30.116803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.392 [2024-11-20 17:58:30.116963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.392 [2024-11-20 17:58:30.116965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:30.964 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:31:31.225 17:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.225 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:31.225 [2024-11-20 17:58:31.004165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:31.225 Malloc1 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:31.225 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.226 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.226 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.226 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:31.226 [2024-11-20 17:58:31.069693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.226 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.226 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2803088 00:31:31.226 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:31:31.226 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
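The stats dump that follows is the heart of the ADQ check: the mqprio and flower rules installed earlier steer every port-4420 flow onto the queues of traffic class 1, so with placement-id grouping and --sock-priority 1 in effect all four I/O qpairs should land on a single poll group and leave the other three idle. Condensed, the target bring-up traced above is equivalent to the following scripts/rpc.py sequence (a sketch only, using the same parameters; rpc_cmd in the harness issues the same RPCs against /var/tmp/spdk.sock):

    scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420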
00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:31:33.773 "tick_rate": 2400000000, 00:31:33.773 "poll_groups": [ 00:31:33.773 { 00:31:33.773 "name": "nvmf_tgt_poll_group_000", 00:31:33.773 "admin_qpairs": 1, 00:31:33.773 "io_qpairs": 4, 00:31:33.773 "current_admin_qpairs": 1, 00:31:33.773 "current_io_qpairs": 4, 00:31:33.773 "pending_bdev_io": 0, 00:31:33.773 "completed_nvme_io": 33874, 00:31:33.773 "transports": [ 00:31:33.773 { 00:31:33.773 "trtype": "TCP" 00:31:33.773 } 00:31:33.773 ] 00:31:33.773 }, 00:31:33.773 { 00:31:33.773 "name": "nvmf_tgt_poll_group_001", 00:31:33.773 "admin_qpairs": 0, 00:31:33.773 "io_qpairs": 0, 00:31:33.773 "current_admin_qpairs": 0, 00:31:33.773 "current_io_qpairs": 0, 00:31:33.773 "pending_bdev_io": 0, 00:31:33.773 "completed_nvme_io": 0, 00:31:33.773 "transports": [ 00:31:33.773 { 00:31:33.773 "trtype": "TCP" 00:31:33.773 } 00:31:33.773 ] 00:31:33.773 }, 00:31:33.773 { 00:31:33.773 "name": "nvmf_tgt_poll_group_002", 00:31:33.773 "admin_qpairs": 0, 00:31:33.773 "io_qpairs": 0, 00:31:33.773 "current_admin_qpairs": 0, 00:31:33.773 "current_io_qpairs": 0, 00:31:33.773 "pending_bdev_io": 0, 00:31:33.773 "completed_nvme_io": 0, 00:31:33.773 "transports": [ 00:31:33.773 { 00:31:33.773 "trtype": "TCP" 00:31:33.773 } 00:31:33.773 ] 00:31:33.773 }, 00:31:33.773 { 00:31:33.773 "name": "nvmf_tgt_poll_group_003", 00:31:33.773 "admin_qpairs": 0, 00:31:33.773 "io_qpairs": 0, 00:31:33.773 "current_admin_qpairs": 0, 00:31:33.773 "current_io_qpairs": 0, 00:31:33.773 "pending_bdev_io": 0, 00:31:33.773 "completed_nvme_io": 0, 00:31:33.773 "transports": [ 00:31:33.773 { 00:31:33.773 "trtype": "TCP" 00:31:33.773 } 00:31:33.773 ] 00:31:33.773 } 00:31:33.773 ] 00:31:33.773 }' 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:31:33.773 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2803088 00:31:41.934 Initializing NVMe Controllers 00:31:41.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:41.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:41.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:41.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:41.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:31:41.934 Initialization complete. Launching workers. 
00:31:41.934 ========================================================
00:31:41.934 Latency(us)
00:31:41.934 Device Information : IOPS MiB/s Average min max
00:31:41.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6003.20 23.45 10686.75 1273.47 61391.41
00:31:41.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6295.10 24.59 10169.54 1145.67 59212.83
00:31:41.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6034.00 23.57 10626.28 1262.05 58426.17
00:31:41.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6085.70 23.77 10540.73 1257.28 59155.46
00:31:41.934 ========================================================
00:31:41.934 Total : 24418.00 95.38 10502.08 1145.67 61391.41
00:31:41.934
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:41.934 rmmod nvme_tcp
00:31:41.934 rmmod nvme_fabrics
00:31:41.934 rmmod nvme_keyring
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 2802893 ']'
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 2802893
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2802893 ']'
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2802893
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2802893
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2802893'
00:31:41.934 killing process with pid 2802893
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2802893
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2802893
00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:31:41.934
17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.934 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:31:43.902 00:31:43.902 real 0m52.459s 00:31:43.902 user 2m47.928s 00:31:43.902 sys 0m10.964s 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:43.902 ************************************ 00:31:43.902 END TEST nvmf_perf_adq 00:31:43.902 ************************************ 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:43.902 ************************************ 00:31:43.902 START TEST nvmf_shutdown 00:31:43.902 ************************************ 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:43.902 * Looking for test storage... 
00:31:43.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:31:43.902 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:44.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.164 --rc genhtml_branch_coverage=1 00:31:44.164 --rc genhtml_function_coverage=1 00:31:44.164 --rc genhtml_legend=1 00:31:44.164 --rc geninfo_all_blocks=1 00:31:44.164 --rc geninfo_unexecuted_blocks=1 00:31:44.164 00:31:44.164 ' 00:31:44.164 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:44.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.165 --rc genhtml_branch_coverage=1 00:31:44.165 --rc genhtml_function_coverage=1 00:31:44.165 --rc genhtml_legend=1 00:31:44.165 --rc geninfo_all_blocks=1 00:31:44.165 --rc geninfo_unexecuted_blocks=1 00:31:44.165 00:31:44.165 ' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:44.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.165 --rc genhtml_branch_coverage=1 00:31:44.165 --rc genhtml_function_coverage=1 00:31:44.165 --rc genhtml_legend=1 00:31:44.165 --rc geninfo_all_blocks=1 00:31:44.165 --rc geninfo_unexecuted_blocks=1 00:31:44.165 00:31:44.165 ' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:44.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.165 --rc genhtml_branch_coverage=1 00:31:44.165 --rc genhtml_function_coverage=1 00:31:44.165 --rc genhtml_legend=1 00:31:44.165 --rc geninfo_all_blocks=1 00:31:44.165 --rc geninfo_unexecuted_blocks=1 00:31:44.165 00:31:44.165 ' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
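The cmp_versions trace above is how the suite decides which spelling of the lcov coverage options to use: each version string is split on '.', '-' and ':' and the fields are compared numerically left to right, and anything below 2 takes the older lcov_branch_coverage/lcov_function_coverage form. A minimal standalone sketch of that comparison (version_lt is a hypothetical name; the harness spreads the same steps across cmp_versions and decimal):

    version_lt() {                       # true when version $1 sorts before $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local v
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                         # versions are equal
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # same verdict as the lt 1.15 2 trace above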
00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:44.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:44.165 17:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@169 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:44.165 ************************************ 00:31:44.165 START TEST nvmf_shutdown_tc1 00:31:44.165 ************************************ 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:44.165 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:44.166 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:44.166 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.166 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.166 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.166 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:44.166 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:44.166 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:44.166 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:52.312 17:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:52.312 17:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:52.312 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.312 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:52.313 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:52.313 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.313 17:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:52.313 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:52.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:52.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:31:52.313 00:31:52.313 --- 10.0.0.2 ping statistics --- 00:31:52.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.313 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:52.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:52.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:31:52.313 00:31:52.313 --- 10.0.0.1 ping statistics --- 00:31:52.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.313 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=2809334 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 2809334 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2809334 ']' 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
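nvmftestinit has just rebuilt the same two-namespace rig the perf_adq run used: the first e810 port moves into a private network namespace and becomes the target side, while the second port stays in the root namespace as the initiator. Condensed from the trace above (interface names as detected on this node):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                    # root namespace to target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace back to initiator

Both pings answering is what gates the return 0 from nvmf_tcp_init; the nvmf_tgt started next is then launched inside the namespace via NVMF_TARGET_NS_CMD.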
00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:52.313 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:52.313 [2024-11-20 17:58:51.568795] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:31:52.313 [2024-11-20 17:58:51.568862] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:52.313 [2024-11-20 17:58:51.658791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:52.313 [2024-11-20 17:58:51.706595] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:52.313 [2024-11-20 17:58:51.706665] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:52.313 [2024-11-20 17:58:51.706673] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:52.313 [2024-11-20 17:58:51.706680] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:52.314 [2024-11-20 17:58:51.706686] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:52.314 [2024-11-20 17:58:51.706846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:52.314 [2024-11-20 17:58:51.707001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:52.314 [2024-11-20 17:58:51.707252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:31:52.314 [2024-11-20 17:58:51.707253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:52.575 [2024-11-20 17:58:52.451632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:52.575 17:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.575 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.837 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:52.837 Malloc1 
00:31:52.837 [2024-11-20 17:58:52.569150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.837 Malloc2 00:31:52.837 Malloc3 00:31:52.837 Malloc4 00:31:52.837 Malloc5 00:31:53.099 Malloc6 00:31:53.099 Malloc7 00:31:53.099 Malloc8 00:31:53.099 Malloc9 00:31:53.099 Malloc10 00:31:53.099 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.099 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:53.099 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:53.099 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2809580 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2809580 /var/tmp/bdevperf.sock 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2809580 ']' 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:53.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
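The ten Malloc bdevs above come out of create_subsystems, which appends one block of RPCs per subsystem to rpcs.txt and then replays the whole file in a single rpc_cmd batch. The block text itself is never echoed in this log; judging from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set in shutdown.sh and the cnode naming used throughout, each iteration should contribute roughly:

    # hypothetical reconstruction of one rpcs.txt block for subsystem $i
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

(the serial number format in particular is a guess). The bdev_svc launched next plays the initiator half, attaching an NVMe bdev over TCP to each of those subsystems from the generated JSON.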
00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.361 { 00:31:53.361 "params": { 00:31:53.361 "name": "Nvme$subsystem", 00:31:53.361 "trtype": "$TEST_TRANSPORT", 00:31:53.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.361 "adrfam": "ipv4", 00:31:53.361 "trsvcid": "$NVMF_PORT", 00:31:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.361 "hdgst": ${hdgst:-false}, 00:31:53.361 "ddgst": ${ddgst:-false} 00:31:53.361 }, 00:31:53.361 "method": "bdev_nvme_attach_controller" 00:31:53.361 } 00:31:53.361 EOF 00:31:53.361 )") 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.361 { 00:31:53.361 "params": { 00:31:53.361 "name": "Nvme$subsystem", 00:31:53.361 "trtype": "$TEST_TRANSPORT", 00:31:53.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.361 "adrfam": "ipv4", 00:31:53.361 "trsvcid": "$NVMF_PORT", 00:31:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.361 "hdgst": ${hdgst:-false}, 00:31:53.361 "ddgst": ${ddgst:-false} 00:31:53.361 }, 00:31:53.361 "method": "bdev_nvme_attach_controller" 00:31:53.361 } 00:31:53.361 EOF 00:31:53.361 )") 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.361 { 00:31:53.361 "params": { 00:31:53.361 "name": "Nvme$subsystem", 00:31:53.361 "trtype": "$TEST_TRANSPORT", 00:31:53.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.361 "adrfam": "ipv4", 00:31:53.361 "trsvcid": "$NVMF_PORT", 00:31:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.361 "hdgst": ${hdgst:-false}, 00:31:53.361 "ddgst": ${ddgst:-false} 00:31:53.361 }, 00:31:53.361 "method": "bdev_nvme_attach_controller" 
00:31:53.361 } 00:31:53.361 EOF 00:31:53.361 )") 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.361 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.361 { 00:31:53.361 "params": { 00:31:53.361 "name": "Nvme$subsystem", 00:31:53.361 "trtype": "$TEST_TRANSPORT", 00:31:53.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.361 "adrfam": "ipv4", 00:31:53.361 "trsvcid": "$NVMF_PORT", 00:31:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.361 "hdgst": ${hdgst:-false}, 00:31:53.361 "ddgst": ${ddgst:-false} 00:31:53.361 }, 00:31:53.361 "method": "bdev_nvme_attach_controller" 00:31:53.361 } 00:31:53.361 EOF 00:31:53.361 )") 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.362 { 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme$subsystem", 00:31:53.362 "trtype": "$TEST_TRANSPORT", 00:31:53.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "$NVMF_PORT", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.362 "hdgst": ${hdgst:-false}, 00:31:53.362 "ddgst": ${ddgst:-false} 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 } 00:31:53.362 EOF 00:31:53.362 )") 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.362 { 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme$subsystem", 00:31:53.362 "trtype": "$TEST_TRANSPORT", 00:31:53.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "$NVMF_PORT", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.362 "hdgst": ${hdgst:-false}, 00:31:53.362 "ddgst": ${ddgst:-false} 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 } 00:31:53.362 EOF 00:31:53.362 )") 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.362 [2024-11-20 17:58:53.080579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:31:53.362 [2024-11-20 17:58:53.080649] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.362 { 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme$subsystem", 00:31:53.362 "trtype": "$TEST_TRANSPORT", 00:31:53.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "$NVMF_PORT", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.362 "hdgst": ${hdgst:-false}, 00:31:53.362 "ddgst": ${ddgst:-false} 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 } 00:31:53.362 EOF 00:31:53.362 )") 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.362 { 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme$subsystem", 00:31:53.362 "trtype": "$TEST_TRANSPORT", 00:31:53.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "$NVMF_PORT", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.362 "hdgst": ${hdgst:-false}, 00:31:53.362 "ddgst": ${ddgst:-false} 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 } 00:31:53.362 EOF 00:31:53.362 )") 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.362 { 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme$subsystem", 00:31:53.362 "trtype": "$TEST_TRANSPORT", 00:31:53.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "$NVMF_PORT", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.362 "hdgst": ${hdgst:-false}, 00:31:53.362 "ddgst": ${ddgst:-false} 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 } 00:31:53.362 EOF 00:31:53.362 )") 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:53.362 { 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme$subsystem", 00:31:53.362 "trtype": "$TEST_TRANSPORT", 00:31:53.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.362 "adrfam": "ipv4", 
00:31:53.362 "trsvcid": "$NVMF_PORT", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.362 "hdgst": ${hdgst:-false}, 00:31:53.362 "ddgst": ${ddgst:-false} 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 } 00:31:53.362 EOF 00:31:53.362 )") 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:31:53.362 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme1", 00:31:53.362 "trtype": "tcp", 00:31:53.362 "traddr": "10.0.0.2", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "4420", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:53.362 "hdgst": false, 00:31:53.362 "ddgst": false 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 },{ 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme2", 00:31:53.362 "trtype": "tcp", 00:31:53.362 "traddr": "10.0.0.2", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "4420", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:53.362 "hdgst": false, 00:31:53.362 "ddgst": false 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 },{ 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme3", 00:31:53.362 "trtype": "tcp", 00:31:53.362 "traddr": "10.0.0.2", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "4420", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:53.362 "hdgst": false, 00:31:53.362 "ddgst": false 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 },{ 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme4", 00:31:53.362 "trtype": "tcp", 00:31:53.362 "traddr": "10.0.0.2", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "4420", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:53.362 "hdgst": false, 00:31:53.362 "ddgst": false 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 },{ 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme5", 00:31:53.362 "trtype": "tcp", 00:31:53.362 "traddr": "10.0.0.2", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "4420", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:53.362 "hdgst": false, 00:31:53.362 "ddgst": false 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 },{ 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme6", 00:31:53.362 "trtype": "tcp", 00:31:53.362 "traddr": "10.0.0.2", 00:31:53.362 "adrfam": "ipv4", 00:31:53.362 "trsvcid": "4420", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:53.362 "hdgst": false, 00:31:53.362 "ddgst": false 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 },{ 00:31:53.362 "params": { 00:31:53.362 "name": "Nvme7", 00:31:53.362 "trtype": "tcp", 00:31:53.362 "traddr": "10.0.0.2", 00:31:53.362 
"adrfam": "ipv4", 00:31:53.362 "trsvcid": "4420", 00:31:53.362 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:53.362 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:53.362 "hdgst": false, 00:31:53.362 "ddgst": false 00:31:53.362 }, 00:31:53.362 "method": "bdev_nvme_attach_controller" 00:31:53.362 },{ 00:31:53.363 "params": { 00:31:53.363 "name": "Nvme8", 00:31:53.363 "trtype": "tcp", 00:31:53.363 "traddr": "10.0.0.2", 00:31:53.363 "adrfam": "ipv4", 00:31:53.363 "trsvcid": "4420", 00:31:53.363 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:53.363 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:53.363 "hdgst": false, 00:31:53.363 "ddgst": false 00:31:53.363 }, 00:31:53.363 "method": "bdev_nvme_attach_controller" 00:31:53.363 },{ 00:31:53.363 "params": { 00:31:53.363 "name": "Nvme9", 00:31:53.363 "trtype": "tcp", 00:31:53.363 "traddr": "10.0.0.2", 00:31:53.363 "adrfam": "ipv4", 00:31:53.363 "trsvcid": "4420", 00:31:53.363 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:53.363 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:53.363 "hdgst": false, 00:31:53.363 "ddgst": false 00:31:53.363 }, 00:31:53.363 "method": "bdev_nvme_attach_controller" 00:31:53.363 },{ 00:31:53.363 "params": { 00:31:53.363 "name": "Nvme10", 00:31:53.363 "trtype": "tcp", 00:31:53.363 "traddr": "10.0.0.2", 00:31:53.363 "adrfam": "ipv4", 00:31:53.363 "trsvcid": "4420", 00:31:53.363 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:53.363 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:53.363 "hdgst": false, 00:31:53.363 "ddgst": false 00:31:53.363 }, 00:31:53.363 "method": "bdev_nvme_attach_controller" 00:31:53.363 }' 00:31:53.363 [2024-11-20 17:58:53.165148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.363 [2024-11-20 17:58:53.212073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2809580 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:31:55.278 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:31:56.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2809580 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2809334 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.220 { 00:31:56.220 "params": { 00:31:56.220 "name": "Nvme$subsystem", 00:31:56.220 "trtype": "$TEST_TRANSPORT", 00:31:56.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.220 "adrfam": "ipv4", 00:31:56.220 "trsvcid": "$NVMF_PORT", 00:31:56.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.220 "hdgst": ${hdgst:-false}, 00:31:56.220 "ddgst": ${ddgst:-false} 00:31:56.220 }, 00:31:56.220 "method": "bdev_nvme_attach_controller" 00:31:56.220 } 00:31:56.220 EOF 00:31:56.220 )") 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.220 { 00:31:56.220 "params": { 00:31:56.220 "name": "Nvme$subsystem", 00:31:56.220 "trtype": "$TEST_TRANSPORT", 00:31:56.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.220 "adrfam": "ipv4", 00:31:56.220 "trsvcid": "$NVMF_PORT", 00:31:56.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.220 "hdgst": ${hdgst:-false}, 00:31:56.220 "ddgst": ${ddgst:-false} 00:31:56.220 }, 00:31:56.220 "method": "bdev_nvme_attach_controller" 00:31:56.220 } 00:31:56.220 EOF 00:31:56.220 )") 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.220 { 00:31:56.220 "params": { 00:31:56.220 "name": "Nvme$subsystem", 00:31:56.220 "trtype": "$TEST_TRANSPORT", 00:31:56.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.220 "adrfam": "ipv4", 00:31:56.220 "trsvcid": "$NVMF_PORT", 00:31:56.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.220 "hdgst": ${hdgst:-false}, 00:31:56.220 "ddgst": ${ddgst:-false} 00:31:56.220 }, 00:31:56.220 "method": "bdev_nvme_attach_controller" 00:31:56.220 } 00:31:56.220 EOF 00:31:56.220 )") 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.220 { 00:31:56.220 "params": { 00:31:56.220 "name": "Nvme$subsystem", 00:31:56.220 "trtype": "$TEST_TRANSPORT", 00:31:56.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.220 "adrfam": "ipv4", 00:31:56.220 "trsvcid": "$NVMF_PORT", 00:31:56.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.220 "hdgst": ${hdgst:-false}, 00:31:56.220 "ddgst": ${ddgst:-false} 00:31:56.220 }, 00:31:56.220 "method": "bdev_nvme_attach_controller" 00:31:56.220 } 00:31:56.220 EOF 00:31:56.220 )") 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.220 { 00:31:56.220 "params": { 00:31:56.220 "name": "Nvme$subsystem", 00:31:56.220 "trtype": "$TEST_TRANSPORT", 00:31:56.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.220 "adrfam": "ipv4", 00:31:56.220 "trsvcid": "$NVMF_PORT", 00:31:56.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.220 "hdgst": ${hdgst:-false}, 00:31:56.220 "ddgst": ${ddgst:-false} 00:31:56.220 }, 00:31:56.220 "method": "bdev_nvme_attach_controller" 00:31:56.220 } 00:31:56.220 EOF 00:31:56.220 )") 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.220 { 00:31:56.220 "params": { 00:31:56.220 "name": "Nvme$subsystem", 00:31:56.220 "trtype": "$TEST_TRANSPORT", 00:31:56.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.220 "adrfam": "ipv4", 00:31:56.220 "trsvcid": "$NVMF_PORT", 00:31:56.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.220 "hdgst": ${hdgst:-false}, 00:31:56.220 "ddgst": ${ddgst:-false} 00:31:56.220 }, 00:31:56.220 "method": "bdev_nvme_attach_controller" 00:31:56.220 } 00:31:56.220 EOF 00:31:56.220 )") 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.220 [2024-11-20 17:58:55.855398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:31:56.220 [2024-11-20 17:58:55.855453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810225 ] 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.220 { 00:31:56.220 "params": { 00:31:56.220 "name": "Nvme$subsystem", 00:31:56.220 "trtype": "$TEST_TRANSPORT", 00:31:56.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.220 "adrfam": "ipv4", 00:31:56.220 "trsvcid": "$NVMF_PORT", 00:31:56.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.220 "hdgst": ${hdgst:-false}, 00:31:56.220 "ddgst": ${ddgst:-false} 00:31:56.220 }, 00:31:56.220 "method": "bdev_nvme_attach_controller" 00:31:56.220 } 00:31:56.220 EOF 00:31:56.220 )") 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.220 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.220 { 00:31:56.220 "params": { 00:31:56.220 "name": "Nvme$subsystem", 00:31:56.220 "trtype": "$TEST_TRANSPORT", 00:31:56.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "$NVMF_PORT", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.221 "hdgst": ${hdgst:-false}, 00:31:56.221 "ddgst": ${ddgst:-false} 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 } 00:31:56.221 EOF 00:31:56.221 )") 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.221 { 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme$subsystem", 00:31:56.221 "trtype": "$TEST_TRANSPORT", 00:31:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "$NVMF_PORT", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.221 "hdgst": ${hdgst:-false}, 00:31:56.221 "ddgst": ${ddgst:-false} 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 } 00:31:56.221 EOF 00:31:56.221 )") 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:56.221 { 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme$subsystem", 00:31:56.221 "trtype": "$TEST_TRANSPORT", 00:31:56.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.221 
"adrfam": "ipv4", 00:31:56.221 "trsvcid": "$NVMF_PORT", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.221 "hdgst": ${hdgst:-false}, 00:31:56.221 "ddgst": ${ddgst:-false} 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 } 00:31:56.221 EOF 00:31:56.221 )") 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:31:56.221 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme1", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme2", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme3", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme4", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme5", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme6", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme7", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 
00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme8", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme9", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 },{ 00:31:56.221 "params": { 00:31:56.221 "name": "Nvme10", 00:31:56.221 "trtype": "tcp", 00:31:56.221 "traddr": "10.0.0.2", 00:31:56.221 "adrfam": "ipv4", 00:31:56.221 "trsvcid": "4420", 00:31:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:56.221 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:56.221 "hdgst": false, 00:31:56.221 "ddgst": false 00:31:56.221 }, 00:31:56.221 "method": "bdev_nvme_attach_controller" 00:31:56.221 }' 00:31:56.221 [2024-11-20 17:58:55.935348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.221 [2024-11-20 17:58:55.966154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.604 Running I/O for 1 seconds... 
00:31:58.806 1864.00 IOPS, 116.50 MiB/s
00:31:58.806 Latency(us)
[2024-11-20T16:58:58.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:58.806 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme1n1 : 1.12 228.40 14.27 0.00 0.00 277421.87 23702.19 248162.99
00:31:58.806 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme2n1 : 1.14 224.16 14.01 0.00 0.00 277023.79 18677.76 249910.61
00:31:58.806 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme3n1 : 1.08 237.98 14.87 0.00 0.00 256535.89 11468.80 237677.23
00:31:58.806 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme4n1 : 1.08 237.56 14.85 0.00 0.00 252157.01 14636.37 269134.51
00:31:58.806 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme5n1 : 1.11 233.92 14.62 0.00 0.00 247016.74 19770.03 237677.23
00:31:58.806 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme6n1 : 1.11 229.84 14.37 0.00 0.00 251878.61 17476.27 235929.60
00:31:58.806 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme7n1 : 1.19 269.95 16.87 0.00 0.00 211877.38 16493.23 263891.63
00:31:58.806 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme8n1 : 1.18 324.22 20.26 0.00 0.00 173145.10 9666.56 227191.47
00:31:58.806 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme9n1 : 1.20 267.60 16.73 0.00 0.00 206507.78 11851.09 281367.89
00:31:58.806 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:58.806 Verification LBA range: start 0x0 length 0x400
00:31:58.806 Nvme10n1 : 1.18 217.61 13.60 0.00 0.00 248529.28 16384.00 269134.51
[2024-11-20T16:58:58.722Z] ===================================================================================================================
[2024-11-20T16:58:58.722Z] Total : 2471.25 154.45 0.00 0.00 235763.11 9666.56 281367.89
00:31:58.807 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:31:58.807 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:31:58.807 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:58.807 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:58.807 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:31:58.807 17:58:58
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:31:58.807 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.068 rmmod nvme_tcp 00:31:59.068 rmmod nvme_fabrics 00:31:59.068 rmmod nvme_keyring 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 2809334 ']' 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 2809334 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2809334 ']' 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2809334 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2809334 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:59.068 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2809334' 00:31:59.068 killing process with pid 2809334 00:31:59.069 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2809334 00:31:59.069 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2809334 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:31:59.329 17:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.329 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.330 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.330 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.879 00:32:01.879 real 0m17.259s 00:32:01.879 user 0m35.787s 00:32:01.879 sys 0m7.036s 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:01.879 ************************************ 00:32:01.879 END TEST nvmf_shutdown_tc1 00:32:01.879 ************************************ 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:01.879 ************************************ 00:32:01.879 START TEST nvmf_shutdown_tc2 00:32:01.879 ************************************ 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:32:01.879 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.880 
17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:01.880 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:01.880 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:01.880 17:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:01.880 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:01.881 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:01.881 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:01.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:01.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:32:01.881 00:32:01.881 --- 10.0.0.2 ping statistics --- 00:32:01.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.881 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:01.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:01.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:32:01.881 00:32:01.881 --- 10.0.0.1 ping statistics --- 00:32:01.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.881 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=2811326 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 2811326 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2811326 ']' 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.881 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:01.882 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.882 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:01.882 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:01.882 [2024-11-20 17:59:01.665087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:01.882 [2024-11-20 17:59:01.665156] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.882 [2024-11-20 17:59:01.750585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:01.882 [2024-11-20 17:59:01.784641] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.882 [2024-11-20 17:59:01.784678] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.882 [2024-11-20 17:59:01.784684] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.882 [2024-11-20 17:59:01.784689] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.882 [2024-11-20 17:59:01.784693] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.882 [2024-11-20 17:59:01.784839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.882 [2024-11-20 17:59:01.784996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.882 [2024-11-20 17:59:01.785149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.882 [2024-11-20 17:59:01.785152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:32:02.825 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:02.825 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:02.825 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:02.825 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:02.826 [2024-11-20 17:59:02.509243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.826 
17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
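The ten identical shutdown.sh@28-@29 cat iterations above accumulate one RPC block per subsystem into rpcs.txt, and the single rpc_cmd at shutdown.sh@36 then replays the whole file against the target. The heredoc bodies themselves are not echoed in this trace, so the RPC lines in this sketch are illustrative; only the Malloc$i bdev names, the nqn.2016-06.io.spdk:cnode$i NQNs, and the 10.0.0.2:4420 listener are confirmed elsewhere in this log (the script itself uses a cat heredoc per iteration, shown here as echo blocks so the sketch stays copy-paste runnable):

    # shape of create_subsystems (shutdown.sh@27-@36); RPC bodies illustrative
    rm -rf "$testdir/rpcs.txt"          # $testdir = test/nvmf/target, per the trace
    num_subsystems=({1..10})
    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"   # sizes are assumptions
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"       # assumption: the bare rpc_cmd at @36 reads this file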
00:32:02.826 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:02.826 Malloc1 00:32:02.826 [2024-11-20 17:59:02.607886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.826 Malloc2 00:32:02.826 Malloc3 00:32:02.826 Malloc4 00:32:02.826 Malloc5 00:32:03.087 Malloc6 00:32:03.087 Malloc7 00:32:03.087 Malloc8 00:32:03.087 Malloc9 00:32:03.087 Malloc10 00:32:03.087 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.087 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:03.087 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:03.087 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2811704 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2811704 /var/tmp/bdevperf.sock 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2811704 ']' 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:03.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.348 { 00:32:03.348 "params": { 00:32:03.348 "name": "Nvme$subsystem", 00:32:03.348 "trtype": "$TEST_TRANSPORT", 00:32:03.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.348 "adrfam": "ipv4", 00:32:03.348 "trsvcid": "$NVMF_PORT", 00:32:03.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.348 "hdgst": ${hdgst:-false}, 00:32:03.348 "ddgst": ${ddgst:-false} 00:32:03.348 }, 00:32:03.348 "method": "bdev_nvme_attach_controller" 00:32:03.348 } 00:32:03.348 EOF 00:32:03.348 )") 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.348 { 00:32:03.348 "params": { 00:32:03.348 "name": "Nvme$subsystem", 00:32:03.348 "trtype": "$TEST_TRANSPORT", 00:32:03.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.348 "adrfam": "ipv4", 00:32:03.348 "trsvcid": "$NVMF_PORT", 00:32:03.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.348 "hdgst": ${hdgst:-false}, 00:32:03.348 "ddgst": ${ddgst:-false} 00:32:03.348 }, 00:32:03.348 "method": "bdev_nvme_attach_controller" 00:32:03.348 } 00:32:03.348 EOF 00:32:03.348 )") 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.348 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.348 { 00:32:03.348 "params": { 00:32:03.348 "name": "Nvme$subsystem", 00:32:03.348 "trtype": "$TEST_TRANSPORT", 00:32:03.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.348 "adrfam": "ipv4", 00:32:03.348 "trsvcid": "$NVMF_PORT", 00:32:03.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.348 "hdgst": ${hdgst:-false}, 00:32:03.348 "ddgst": ${ddgst:-false} 00:32:03.348 }, 00:32:03.348 "method": 
"bdev_nvme_attach_controller" 00:32:03.349 } 00:32:03.349 EOF 00:32:03.349 )") 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.349 { 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme$subsystem", 00:32:03.349 "trtype": "$TEST_TRANSPORT", 00:32:03.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "$NVMF_PORT", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.349 "hdgst": ${hdgst:-false}, 00:32:03.349 "ddgst": ${ddgst:-false} 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 } 00:32:03.349 EOF 00:32:03.349 )") 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.349 { 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme$subsystem", 00:32:03.349 "trtype": "$TEST_TRANSPORT", 00:32:03.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "$NVMF_PORT", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.349 "hdgst": ${hdgst:-false}, 00:32:03.349 "ddgst": ${ddgst:-false} 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 } 00:32:03.349 EOF 00:32:03.349 )") 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.349 { 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme$subsystem", 00:32:03.349 "trtype": "$TEST_TRANSPORT", 00:32:03.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "$NVMF_PORT", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.349 "hdgst": ${hdgst:-false}, 00:32:03.349 "ddgst": ${ddgst:-false} 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 } 00:32:03.349 EOF 00:32:03.349 )") 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.349 [2024-11-20 17:59:03.053241] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:32:03.349 [2024-11-20 17:59:03.053296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811704 ] 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.349 { 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme$subsystem", 00:32:03.349 "trtype": "$TEST_TRANSPORT", 00:32:03.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "$NVMF_PORT", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.349 "hdgst": ${hdgst:-false}, 00:32:03.349 "ddgst": ${ddgst:-false} 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 } 00:32:03.349 EOF 00:32:03.349 )") 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.349 { 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme$subsystem", 00:32:03.349 "trtype": "$TEST_TRANSPORT", 00:32:03.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "$NVMF_PORT", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.349 "hdgst": ${hdgst:-false}, 00:32:03.349 "ddgst": ${ddgst:-false} 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 } 00:32:03.349 EOF 00:32:03.349 )") 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.349 { 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme$subsystem", 00:32:03.349 "trtype": "$TEST_TRANSPORT", 00:32:03.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "$NVMF_PORT", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.349 "hdgst": ${hdgst:-false}, 00:32:03.349 "ddgst": ${ddgst:-false} 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 } 00:32:03.349 EOF 00:32:03.349 )") 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:03.349 { 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme$subsystem", 00:32:03.349 "trtype": "$TEST_TRANSPORT", 00:32:03.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.349 
"adrfam": "ipv4", 00:32:03.349 "trsvcid": "$NVMF_PORT", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.349 "hdgst": ${hdgst:-false}, 00:32:03.349 "ddgst": ${ddgst:-false} 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 } 00:32:03.349 EOF 00:32:03.349 )") 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:32:03.349 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme1", 00:32:03.349 "trtype": "tcp", 00:32:03.349 "traddr": "10.0.0.2", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "4420", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.349 "hdgst": false, 00:32:03.349 "ddgst": false 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 },{ 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme2", 00:32:03.349 "trtype": "tcp", 00:32:03.349 "traddr": "10.0.0.2", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "4420", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:03.349 "hdgst": false, 00:32:03.349 "ddgst": false 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 },{ 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme3", 00:32:03.349 "trtype": "tcp", 00:32:03.349 "traddr": "10.0.0.2", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "4420", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:03.349 "hdgst": false, 00:32:03.349 "ddgst": false 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 },{ 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme4", 00:32:03.349 "trtype": "tcp", 00:32:03.349 "traddr": "10.0.0.2", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "4420", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:03.349 "hdgst": false, 00:32:03.349 "ddgst": false 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 },{ 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme5", 00:32:03.349 "trtype": "tcp", 00:32:03.349 "traddr": "10.0.0.2", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "4420", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:03.349 "hdgst": false, 00:32:03.349 "ddgst": false 00:32:03.349 }, 00:32:03.349 "method": "bdev_nvme_attach_controller" 00:32:03.349 },{ 00:32:03.349 "params": { 00:32:03.349 "name": "Nvme6", 00:32:03.349 "trtype": "tcp", 00:32:03.349 "traddr": "10.0.0.2", 00:32:03.349 "adrfam": "ipv4", 00:32:03.349 "trsvcid": "4420", 00:32:03.349 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:03.349 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:03.349 "hdgst": false, 00:32:03.350 "ddgst": false 00:32:03.350 }, 00:32:03.350 "method": "bdev_nvme_attach_controller" 00:32:03.350 },{ 00:32:03.350 "params": { 00:32:03.350 "name": "Nvme7", 00:32:03.350 "trtype": "tcp", 00:32:03.350 "traddr": "10.0.0.2", 
00:32:03.350 "adrfam": "ipv4", 00:32:03.350 "trsvcid": "4420", 00:32:03.350 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:03.350 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:03.350 "hdgst": false, 00:32:03.350 "ddgst": false 00:32:03.350 }, 00:32:03.350 "method": "bdev_nvme_attach_controller" 00:32:03.350 },{ 00:32:03.350 "params": { 00:32:03.350 "name": "Nvme8", 00:32:03.350 "trtype": "tcp", 00:32:03.350 "traddr": "10.0.0.2", 00:32:03.350 "adrfam": "ipv4", 00:32:03.350 "trsvcid": "4420", 00:32:03.350 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:32:03.350 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:03.350 "hdgst": false, 00:32:03.350 "ddgst": false 00:32:03.350 }, 00:32:03.350 "method": "bdev_nvme_attach_controller" 00:32:03.350 },{ 00:32:03.350 "params": { 00:32:03.350 "name": "Nvme9", 00:32:03.350 "trtype": "tcp", 00:32:03.350 "traddr": "10.0.0.2", 00:32:03.350 "adrfam": "ipv4", 00:32:03.350 "trsvcid": "4420", 00:32:03.350 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:03.350 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:03.350 "hdgst": false, 00:32:03.350 "ddgst": false 00:32:03.350 }, 00:32:03.350 "method": "bdev_nvme_attach_controller" 00:32:03.350 },{ 00:32:03.350 "params": { 00:32:03.350 "name": "Nvme10", 00:32:03.350 "trtype": "tcp", 00:32:03.350 "traddr": "10.0.0.2", 00:32:03.350 "adrfam": "ipv4", 00:32:03.350 "trsvcid": "4420", 00:32:03.350 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:03.350 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:03.350 "hdgst": false, 00:32:03.350 "ddgst": false 00:32:03.350 }, 00:32:03.350 "method": "bdev_nvme_attach_controller" 00:32:03.350 }' 00:32:03.350 [2024-11-20 17:59:03.131861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.350 [2024-11-20 17:59:03.163988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.267 Running I/O for 10 seconds... 
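The shutdown.sh@51-@70 trace that follows is the waitforio helper gating the shutdown: it polls bdevperf over the RPC socket until Nvme1n1 has completed at least 100 reads, retrying up to ten times at 0.25 s intervals. Condensed, the helper body is:

    # poll bdev_get_iostat until enough reads have completed (see trace below)
    ret=1
    i=10
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done
    return $ret    # in this run the count went 3 -> 67 -> 131, so ret=0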
00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:32:05.267 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:32:05.267 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:32:05.267 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:05.267 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:05.267 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:05.267 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.267 17:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:05.529 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.529 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:32:05.529 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:32:05.529 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2811704 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2811704 ']' 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2811704 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2811704 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2811704' 00:32:05.791 killing process with pid 2811704 00:32:05.791 17:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2811704
00:32:05.791 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2811704
00:32:06.052 1921.00 IOPS, 120.06 MiB/s [2024-11-20T16:59:05.968Z] Received shutdown signal, test time was about 1.052494 seconds
00:32:06.052
00:32:06.052 Latency(us)
00:32:06.052 [2024-11-20T16:59:05.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:06.052 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.052 Verification LBA range: start 0x0 length 0x400
00:32:06.052 Nvme1n1 : 1.00 191.82 11.99 0.00 0.00 329908.34 21299.20 258648.75
00:32:06.053 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme2n1 : 1.05 243.47 15.22 0.00 0.00 255112.32 21408.43 249910.61
00:32:06.053 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme3n1 : 1.03 251.71 15.73 0.00 0.00 241627.45 2703.36 248162.99
00:32:06.053 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme4n1 : 1.03 248.86 15.55 0.00 0.00 239619.20 14527.15 255153.49
00:32:06.053 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme5n1 : 1.05 244.57 15.29 0.00 0.00 239454.51 18568.53 248162.99
00:32:06.053 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme6n1 : 1.04 246.50 15.41 0.00 0.00 232525.01 14308.69 249910.61
00:32:06.053 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme7n1 : 1.02 250.39 15.65 0.00 0.00 223725.87 15837.87 242920.11
00:32:06.053 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme8n1 : 1.04 245.84 15.36 0.00 0.00 223503.57 14199.47 258648.75
00:32:06.053 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme9n1 : 1.05 243.97 15.25 0.00 0.00 220671.15 16165.55 251658.24
00:32:06.053 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:06.053 Verification LBA range: start 0x0 length 0x400
00:32:06.053 Nvme10n1 : 1.02 188.37 11.77 0.00 0.00 271853.23 15728.64 270882.13
00:32:06.053 [2024-11-20T16:59:05.969Z] ===================================================================================================================
00:32:06.053 [2024-11-20T16:59:05.969Z] Total : 2355.50 147.22 0.00 0.00 245000.79 2703.36 270882.13
00:32:06.053 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2811326
00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f
./local-job0-0-verify.state 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:07.053 rmmod nvme_tcp 00:32:07.053 rmmod nvme_fabrics 00:32:07.053 rmmod nvme_keyring 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 2811326 ']' 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 2811326 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2811326 ']' 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2811326 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.053 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2811326 00:32:07.314 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:07.314 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:07.314 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2811326' 00:32:07.314 killing process with pid 2811326 00:32:07.314 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2811326 00:32:07.314 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2811326 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' 
== iso ']' 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.576 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.508 00:32:09.508 real 0m8.089s 00:32:09.508 user 0m24.835s 00:32:09.508 sys 0m1.281s 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.508 ************************************ 00:32:09.508 END TEST nvmf_shutdown_tc2 00:32:09.508 ************************************ 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@171 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:09.508 ************************************ 00:32:09.508 START TEST nvmf_shutdown_tc3 00:32:09.508 ************************************ 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:32:09.508 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:09.509 17:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.509 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.510 17:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.510 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:09.511 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:09.511 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:09.511 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 
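The nvmf/common.sh@406-@425 entries in this span map each matched PCI function to its kernel interface by globbing sysfs, which is what produces the "Found net devices under ..." messages; condensed, the loop is:

    # list each NIC's net children in sysfs, keep only the interface names
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The traced version additionally skips interfaces that are not administratively up, the [[ up == up ]] checks at @414 in the same span.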
00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:09.512 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.512 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:09.513 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:09.513 17:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.513 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.514 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.514 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.514 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.514 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.780 17:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:32:09.780 00:32:09.780 --- 10.0.0.2 ping statistics --- 00:32:09.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.780 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:09.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:32:09.780 00:32:09.780 --- 10.0.0.1 ping statistics --- 00:32:09.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.780 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:32:09.780 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.041 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:32:10.041 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:10.041 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.041 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:10.041 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:10.041 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.041 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=2813111 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 2813111 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 
2813111 ']' 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.042 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:10.042 [2024-11-20 17:59:09.807355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:10.042 [2024-11-20 17:59:09.807420] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.042 [2024-11-20 17:59:09.892571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:10.042 [2024-11-20 17:59:09.926620] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.042 [2024-11-20 17:59:09.926660] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.042 [2024-11-20 17:59:09.926666] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.042 [2024-11-20 17:59:09.926670] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.042 [2024-11-20 17:59:09.926675] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
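[Editor's note] nvmfappstart launches nvmf_tgt inside the namespace (hence the ip netns exec prefix on the command line) and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock; the EAL and tracepoint notices above are the target coming up. The helper's body is not echoed by the trace, so this is only a plausible minimal sketch of such a readiness poll (rpc_get_methods is a real SPDK RPC; max_retries=100 appears as a local in the trace):

  # Readiness poll in the spirit of waitforlisten (sketch, not the verbatim helper).
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # app died before listening
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }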
00:32:10.042 [2024-11-20 17:59:09.926821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.042 [2024-11-20 17:59:09.926972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:10.042 [2024-11-20 17:59:09.927125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.042 [2024-11-20 17:59:09.927127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:10.984 [2024-11-20 17:59:10.658435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.984 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.985 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.985 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.985 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.985 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:10.985 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:10.985 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:32:10.985 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.985 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:10.985 Malloc1 00:32:10.985 [2024-11-20 17:59:10.756953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.985 Malloc2 00:32:10.985 Malloc3 00:32:10.985 Malloc4 00:32:10.985 Malloc5 00:32:11.246 Malloc6 00:32:11.246 Malloc7 00:32:11.246 Malloc8 00:32:11.246 Malloc9 00:32:11.246 Malloc10 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2813327 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2813327 /var/tmp/bdevperf.sock 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2813327 ']' 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:11.246 17:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:11.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.246 { 00:32:11.246 "params": { 00:32:11.246 "name": "Nvme$subsystem", 00:32:11.246 "trtype": "$TEST_TRANSPORT", 00:32:11.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.246 "adrfam": "ipv4", 00:32:11.246 "trsvcid": "$NVMF_PORT", 00:32:11.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.246 "hdgst": ${hdgst:-false}, 00:32:11.246 "ddgst": ${ddgst:-false} 00:32:11.246 }, 00:32:11.246 "method": "bdev_nvme_attach_controller" 00:32:11.246 } 00:32:11.246 EOF 00:32:11.246 )") 00:32:11.246 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.507 { 00:32:11.507 "params": { 00:32:11.507 "name": "Nvme$subsystem", 00:32:11.507 "trtype": "$TEST_TRANSPORT", 00:32:11.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.507 "adrfam": "ipv4", 00:32:11.507 "trsvcid": "$NVMF_PORT", 00:32:11.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.507 "hdgst": ${hdgst:-false}, 00:32:11.507 "ddgst": ${ddgst:-false} 00:32:11.507 }, 00:32:11.507 "method": "bdev_nvme_attach_controller" 00:32:11.507 } 00:32:11.507 EOF 00:32:11.507 )") 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.507 { 00:32:11.507 "params": { 00:32:11.507 
"name": "Nvme$subsystem", 00:32:11.507 "trtype": "$TEST_TRANSPORT", 00:32:11.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.507 "adrfam": "ipv4", 00:32:11.507 "trsvcid": "$NVMF_PORT", 00:32:11.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.507 "hdgst": ${hdgst:-false}, 00:32:11.507 "ddgst": ${ddgst:-false} 00:32:11.507 }, 00:32:11.507 "method": "bdev_nvme_attach_controller" 00:32:11.507 } 00:32:11.507 EOF 00:32:11.507 )") 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.507 { 00:32:11.507 "params": { 00:32:11.507 "name": "Nvme$subsystem", 00:32:11.507 "trtype": "$TEST_TRANSPORT", 00:32:11.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.507 "adrfam": "ipv4", 00:32:11.507 "trsvcid": "$NVMF_PORT", 00:32:11.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.507 "hdgst": ${hdgst:-false}, 00:32:11.507 "ddgst": ${ddgst:-false} 00:32:11.507 }, 00:32:11.507 "method": "bdev_nvme_attach_controller" 00:32:11.507 } 00:32:11.507 EOF 00:32:11.507 )") 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.507 { 00:32:11.507 "params": { 00:32:11.507 "name": "Nvme$subsystem", 00:32:11.507 "trtype": "$TEST_TRANSPORT", 00:32:11.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.507 "adrfam": "ipv4", 00:32:11.507 "trsvcid": "$NVMF_PORT", 00:32:11.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.507 "hdgst": ${hdgst:-false}, 00:32:11.507 "ddgst": ${ddgst:-false} 00:32:11.507 }, 00:32:11.507 "method": "bdev_nvme_attach_controller" 00:32:11.507 } 00:32:11.507 EOF 00:32:11.507 )") 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.507 { 00:32:11.507 "params": { 00:32:11.507 "name": "Nvme$subsystem", 00:32:11.507 "trtype": "$TEST_TRANSPORT", 00:32:11.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.507 "adrfam": "ipv4", 00:32:11.507 "trsvcid": "$NVMF_PORT", 00:32:11.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.507 "hdgst": ${hdgst:-false}, 00:32:11.507 "ddgst": ${ddgst:-false} 00:32:11.507 }, 00:32:11.507 "method": "bdev_nvme_attach_controller" 00:32:11.507 } 00:32:11.507 EOF 00:32:11.507 )") 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.507 [2024-11-20 17:59:11.197906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:32:11.507 [2024-11-20 17:59:11.197959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813327 ] 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.507 { 00:32:11.507 "params": { 00:32:11.507 "name": "Nvme$subsystem", 00:32:11.507 "trtype": "$TEST_TRANSPORT", 00:32:11.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.507 "adrfam": "ipv4", 00:32:11.507 "trsvcid": "$NVMF_PORT", 00:32:11.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.507 "hdgst": ${hdgst:-false}, 00:32:11.507 "ddgst": ${ddgst:-false} 00:32:11.507 }, 00:32:11.507 "method": "bdev_nvme_attach_controller" 00:32:11.507 } 00:32:11.507 EOF 00:32:11.507 )") 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.507 { 00:32:11.507 "params": { 00:32:11.507 "name": "Nvme$subsystem", 00:32:11.507 "trtype": "$TEST_TRANSPORT", 00:32:11.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.507 "adrfam": "ipv4", 00:32:11.507 "trsvcid": "$NVMF_PORT", 00:32:11.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.507 "hdgst": ${hdgst:-false}, 00:32:11.507 "ddgst": ${ddgst:-false} 00:32:11.507 }, 00:32:11.507 "method": "bdev_nvme_attach_controller" 00:32:11.507 } 00:32:11.507 EOF 00:32:11.507 )") 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.507 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.507 { 00:32:11.507 "params": { 00:32:11.507 "name": "Nvme$subsystem", 00:32:11.507 "trtype": "$TEST_TRANSPORT", 00:32:11.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.507 "adrfam": "ipv4", 00:32:11.507 "trsvcid": "$NVMF_PORT", 00:32:11.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.507 "hdgst": ${hdgst:-false}, 00:32:11.507 "ddgst": ${ddgst:-false} 00:32:11.507 }, 00:32:11.507 "method": "bdev_nvme_attach_controller" 00:32:11.507 } 00:32:11.507 EOF 00:32:11.507 )") 00:32:11.508 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.508 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:32:11.508 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:32:11.508 { 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme$subsystem", 00:32:11.508 "trtype": "$TEST_TRANSPORT", 00:32:11.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.508 
"adrfam": "ipv4", 00:32:11.508 "trsvcid": "$NVMF_PORT", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.508 "hdgst": ${hdgst:-false}, 00:32:11.508 "ddgst": ${ddgst:-false} 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 } 00:32:11.508 EOF 00:32:11.508 )") 00:32:11.508 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:32:11.508 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:32:11.508 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:32:11.508 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme1", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme2", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme3", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme4", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme5", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme6", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme7", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 
00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme8", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme9", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 },{ 00:32:11.508 "params": { 00:32:11.508 "name": "Nvme10", 00:32:11.508 "trtype": "tcp", 00:32:11.508 "traddr": "10.0.0.2", 00:32:11.508 "adrfam": "ipv4", 00:32:11.508 "trsvcid": "4420", 00:32:11.508 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:11.508 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:11.508 "hdgst": false, 00:32:11.508 "ddgst": false 00:32:11.508 }, 00:32:11.508 "method": "bdev_nvme_attach_controller" 00:32:11.508 }' 00:32:11.508 [2024-11-20 17:59:11.274816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.508 [2024-11-20 17:59:11.305992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.891 Running I/O for 10 seconds... 
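[Editor's note] gen_nvmf_target_json expands one bdev_nvme_attach_controller stanza per subsystem and joins them with jq; bdevperf reads the result over /dev/fd/63. The Nvme1 leg written out long-hand as a standalone sketch (every parameter value is lifted from the trace; the outer subsystems/config wrapper is the standard SPDK JSON-config shape):

  # Stand-alone reproduction of the Nvme1 leg of the bdevperf run (sketch).
  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false, "ddgst": false
        }
      }]
    }]
  }
  EOF
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
      -q 64 -o 65536 -w verify -t 10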
00:32:13.152 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.152 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:32:13.152 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:13.152 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.152 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:32:13.152 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:32:13.413 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:32:13.413 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:13.413 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:13.413 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:13.413 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.413 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:13.673 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.674 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:32:13.674 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:32:13.674 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:32:13.949 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:32:13.949 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:13.949 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2813111 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2813111 ']' 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2813111 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2813111 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:13.950 17:59:13 
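[Editor's note] The read_io_count samples above (3, then 67, then 131) are waitforio converging: tc3 will not start killing anything until Nvme1n1 has completed at least 100 reads, which guarantees the shutdown lands mid-workload. The loop as reconstructable from the traced commands (the suite's rpc_cmd wrapper is replaced here by a direct rpc.py call):

  # waitforio from target/shutdown.sh, reconstructed from the trace.
  waitforio() {
      local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
      for ((i = 10; i != 0; i--)); do
          read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
              jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then   # enough I/O has flowed
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }
  # invoked above as: waitforio /var/tmp/bdevperf.sock Nvme1n1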
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2813111' 00:32:13.950 killing process with pid 2813111 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2813111 00:32:13.950 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2813111 00:32:13.950 [2024-11-20 17:59:13.719853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.719997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) 
to be set 00:32:13.950 [2024-11-20 17:59:13.720011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.950 [2024-11-20 17:59:13.720215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.720220] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca100 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the 
state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.721560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccccd0 is same with the state(6) to be set 00:32:13.951 [2024-11-20 17:59:13.722187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.951 
00:32:13.951 [2024-11-20 17:59:13.722222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.951 [2024-11-20 17:59:13.722233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:13.951 [2024-11-20 17:59:13.722241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.951 [2024-11-20 17:59:13.722251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:13.951 [2024-11-20 17:59:13.722259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.951 [2024-11-20 17:59:13.722267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:13.951 [2024-11-20 17:59:13.722274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.951 [2024-11-20 17:59:13.722282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7cc0 is same with the state(6) to be set
00:32:13.951 [2024-11-20 17:59:13.722610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca5f0 is same with the state(6) to be set
00:32:13.952 (tcp.c:1773 message repeated ~52 more times for tqpair=0x1cca5f0, 17:59:13.722626 - 17:59:13.722897)
00:32:13.952 [2024-11-20 17:59:13.723924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccaac0 is same with the state(6) to be set
00:32:13.952 [2024-11-20 17:59:13.725971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccafb0 is same with the state(6) to be set
00:32:13.953 (tcp.c:1773 message repeated ~62 more times for tqpair=0x1ccafb0, 17:59:13.725994 - 17:59:13.726324, interleaved with the nvme_qpair.c output summarized below)
00:32:13.953 [2024-11-20 17:59:13.726177 - 17:59:13.726290] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 cid:59-63 nsid:1 lba:32128-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.953 [2024-11-20 17:59:13.726301 - 17:59:13.726420] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-6 nsid:1 lba:24576-25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.953 [2024-11-20 17:59:13.726429 - 17:59:13.727286] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:7-56 nsid:1 lba:25472-31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.955 [2024-11-20 17:59:13.727296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:13.955 [2024-11-20 17:59:13.727303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.955 [2024-11-20 17:59:13.727314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:13.955 [2024-11-20 17:59:13.727321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.955 [2024-11-20 17:59:13.727369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccb480 is same with the state(6) to be set
00:32:13.955 [2024-11-20 17:59:13.727376] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe61f00 was disconnected and freed. reset controller.
00:32:13.955 (tcp.c:1773 message repeated ~61 more times for tqpair=0x1ccb480, 17:59:13.727388 - 17:59:13.727674)
00:32:13.956 [2024-11-20 17:59:13.727895] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:13.956 [2024-11-20 17:59:13.728623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccb950 is same with the state(6) to be set
00:32:13.956 (tcp.c:1773 message repeated ~9 more times for tqpair=0x1ccb950, 17:59:13.728640 - 17:59:13.728681)
00:32:13.956 [2024-11-20 17:59:13.729166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccbe20 is same with the state(6) to be set
00:32:13.956 [2024-11-20 17:59:13.729538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc310 is same with the state(6) to be set
00:32:13.956 (tcp.c:1773 message repeated ~62 more times for tqpair=0x1ccc310, 17:59:13.729551 - 17:59:13.729856, interleaved with the nvme output below)
00:32:13.956 [2024-11-20 17:59:13.729608] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:13.956 [2024-11-20 17:59:13.729633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:32:13.956 [2024-11-20 17:59:13.729676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d1df0 (9): Bad file descriptor
00:32:13.956 [2024-11-20 17:59:13.729833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:13.956 [2024-11-20 17:59:13.729848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.956 [2024-11-20 17:59:13.729861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:13.956 [2024-11-20 17:59:13.729870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.956 [2024-11-20 17:59:13.729880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:13.957 [2024-11-20 17:59:13.729887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.957 [2024-11-20 17:59:13.729902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1
lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.729910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.729920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.729927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.729937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.729944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.729954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.729965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.729975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.729982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.729992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.729999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957 [2024-11-20 17:59:13.730241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957 [2024-11-20 17:59:13.730250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.957
[2024-11-20 17:59:13.730402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.957
[2024-11-20 17:59:13.730407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.957
[2024-11-20 17:59:13.730415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccc7e0 is same with the state(6) to be set 00:32:13.958
[2024-11-20 17:59:13.730670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.958
[2024-11-20 17:59:13.730721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.958
[2024-11-20 17:59:13.730728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959
[2024-11-20 17:59:13.730737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959
[2024-11-20 17:59:13.730745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959
[2024-11-20 17:59:13.730754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959
[2024-11-20 17:59:13.730761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959
[2024-11-20 17:59:13.730770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959
[2024-11-20 17:59:13.730778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959
[2024-11-20 17:59:13.730787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959
[2024-11-20 17:59:13.730796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959
[2024-11-20 17:59:13.730805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959
[2024-11-20 17:59:13.730812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959
[2024-11-20 17:59:13.730822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20
17:59:13.730829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.730838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.730845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.730855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.730862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.730871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.730878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.730888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.730895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.730904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.730912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.730921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.730928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.730938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.730945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.730954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.730961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.731007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.959 [2024-11-20 17:59:13.731052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.731149] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe65af0 was disconnected and freed. reset controller. 
00:32:13.959 [2024-11-20 17:59:13.731325] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:32:13.959 [2024-11-20 17:59:13.733579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:32:13.959 [2024-11-20 17:59:13.733632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe19630 (9): Bad file descriptor 00:32:13.959 [2024-11-20 17:59:13.733869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.959 [2024-11-20 17:59:13.733883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d1df0 with addr=10.0.0.2, port=4420 00:32:13.959 [2024-11-20 17:59:13.733891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1df0 is same with the state(6) to be set 00:32:13.959 [2024-11-20 17:59:13.733923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.733934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.733943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.733950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.733959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.733966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.733974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.733981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.733988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9a60 is same with the state(6) to be set 00:32:13.959 [2024-11-20 17:59:13.734012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.734020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.734029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.734036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.734045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.734053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.734060] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.734067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.734074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0bde0 is same with the state(6) to be set 00:32:13.959 [2024-11-20 17:59:13.734100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.734108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.734117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.749181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.749237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.749247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.749256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.959 [2024-11-20 17:59:13.749263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.959 [2024-11-20 17:59:13.749272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13500 is same with the state(6) to be set 00:32:13.959 [2024-11-20 17:59:13.749353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0cca0 is same with the 
state(6) to be set 00:32:13.960 [2024-11-20 17:59:13.749448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x908610 is same with the state(6) to be set 00:32:13.960 [2024-11-20 17:59:13.749538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf71d0 is same with the state(6) to be set 00:32:13.960 [2024-11-20 17:59:13.749634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749651] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.960 [2024-11-20 17:59:13.749690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105ad20 is same with the state(6) to be set 00:32:13.960 [2024-11-20 17:59:13.749717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d7cc0 (9): Bad file descriptor 00:32:13.960 [2024-11-20 17:59:13.749781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.749984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.749991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.750000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.750008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.750017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.750024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.750034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.750041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.750050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.750057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.960 [2024-11-20 17:59:13.750068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.960 [2024-11-20 17:59:13.750075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:13.960-00:32:13.962 [2024-11-20 17:59:13.750087-750876] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:27-63 nsid:1 lba:19840-24448 len:128 and WRITE sqid:1 cid:0-9 nsid:1 lba:24576-25728 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each command completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (47 repetitive command/completion pairs condensed) 
00:32:13.962 [2024-11-20 17:59:13.750885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9db7c0 is same with the state(6) to be set 
00:32:13.962 [2024-11-20 17:59:13.750947] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9db7c0 was disconnected and freed. reset controller. 
00:32:13.962 [2024-11-20 17:59:13.751029] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:32:13.962 [2024-11-20 17:59:13.751342-751440] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-4 nsid:1 lba:24576-25088 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each command completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (5 repetitive command/completion pairs condensed) 
00:32:13.962 [2024-11-20 17:59:13.751449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddfa20 is same with the state(6) to be set 
00:32:13.962 [2024-11-20 17:59:13.751490] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xddfa20 was disconnected and freed. reset controller. 
00:32:13.962 [2024-11-20 17:59:13.751636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d1df0 (9): Bad file descriptor 
00:32:13.962 [2024-11-20 17:59:13.751672] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:32:13.962 [2024-11-20 17:59:13.751684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9a60 (9): Bad file descriptor 
00:32:13.962 [2024-11-20 17:59:13.751698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0bde0 (9): Bad file descriptor 
00:32:13.962 [2024-11-20 17:59:13.751717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe13500 (9): Bad file descriptor 
00:32:13.962 [2024-11-20 17:59:13.751729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0cca0 (9): Bad file descriptor 
00:32:13.962 [2024-11-20 17:59:13.751742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x908610 (9): Bad file descriptor 
00:32:13.962 [2024-11-20 17:59:13.751754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf71d0 (9): Bad file descriptor 
00:32:13.962 [2024-11-20 17:59:13.751772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105ad20 (9): Bad file descriptor 
00:32:13.962-00:32:13.964 [2024-11-20 17:59:13.753055-754186] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; each command completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 repetitive command/completion pairs condensed) 
00:32:13.964 [2024-11-20 17:59:13.754236] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe645c0 was disconnected and freed. reset controller. 
00:32:13.964 [2024-11-20 17:59:13.755519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:32:13.964 [2024-11-20 17:59:13.755892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.964 [2024-11-20 17:59:13.755908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe19630 with addr=10.0.0.2, port=4420 00:32:13.964 [2024-11-20 17:59:13.755918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe19630 is same with the state(6) to be set 00:32:13.964 [2024-11-20 17:59:13.755929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:32:13.964 [2024-11-20 17:59:13.755937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:32:13.964 [2024-11-20 17:59:13.755949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:32:13.964 [2024-11-20 17:59:13.755987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.755996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.964 [2024-11-20 17:59:13.756386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.964 [2024-11-20 17:59:13.756394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.965 [2024-11-20 17:59:13.756403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.965 [2024-11-20 17:59:13.756412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.965 [2024-11-20 17:59:13.756421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.965 [2024-11-20 17:59:13.756428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.965 [2024-11-20 17:59:13.756438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.965 [2024-11-20 17:59:13.756445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.965 [2024-11-20 17:59:13.756454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:13.965 [2024-11-20 17:59:13.756461 .. 17:59:13.757066] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:27..63 nsid:1 lba:19840..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (37 identical command/completion pairs condensed; lba advances by 128 per command)
00:32:13.966 [2024-11-20 17:59:13.757074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9da600 is same with the state(6) to be set
00:32:13.966 [2024-11-20 17:59:13.759590] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:13.966 [2024-11-20 17:59:13.759641] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:13.966 [2024-11-20 17:59:13.759659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:32:13.966 [2024-11-20 17:59:13.759674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
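Every completion in the run above carries the same status pair, (00/08). For readers decoding these prints by hand, here is a minimal, self-contained C sketch (illustrative, not SPDK source) of how the 16-bit NVMe completion status field maps onto the (sct/sc) pair and the p/m/dnr flags that spdk_nvme_print_completion emits; the bit layout follows the NVMe base specification, where SCT 0x0 with SC 0x08 means "Command Aborted due to SQ Deletion".

#include <stdint.h>
#include <stdio.h>

/*
 * Decode the NVMe completion status field (CQE dword 3, bits 31:16).
 * Per the NVMe base spec: bit 0 = phase tag (P), bits 8:1 = status
 * code (SC), bits 11:9 = status code type (SCT), bit 14 = more (M),
 * bit 15 = do not retry (DNR).
 */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 (generic) / SC 0x08 ("Command Aborted due to SQ Deletion"):
     * exactly the (00/08) printed for every aborted READ in this log. */
    print_status((uint16_t)((0x0 << 9) | (0x08 << 1)));
    return 0;
}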
00:32:13.966 [2024-11-20 17:59:13.759683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.966 [2024-11-20 17:59:13.759693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:32:13.966 [2024-11-20 17:59:13.759963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.966 [2024-11-20 17:59:13.759978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c9a60 with addr=10.0.0.2, port=4420
00:32:13.966 [2024-11-20 17:59:13.759986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9a60 is same with the state(6) to be set
00:32:13.966 [2024-11-20 17:59:13.759997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe19630 (9): Bad file descriptor
00:32:13.966 [2024-11-20 17:59:13.760658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.966 [2024-11-20 17:59:13.760672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x908610 with addr=10.0.0.2, port=4420
00:32:13.966 [2024-11-20 17:59:13.760680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x908610 is same with the state(6) to be set
00:32:13.966 [2024-11-20 17:59:13.760982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.966 [2024-11-20 17:59:13.760991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d7cc0 with addr=10.0.0.2, port=4420
00:32:13.966 [2024-11-20 17:59:13.760999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7cc0 is same with the state(6) to be set
00:32:13.966 [2024-11-20 17:59:13.761429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.966 [2024-11-20 17:59:13.761469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf71d0 with addr=10.0.0.2, port=4420
00:32:13.966 [2024-11-20 17:59:13.761481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf71d0 is same with the state(6) to be set
00:32:13.966 [2024-11-20 17:59:13.761497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9a60 (9): Bad file descriptor
00:32:13.966 [2024-11-20 17:59:13.761508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:32:13.966 [2024-11-20 17:59:13.761515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:32:13.966 [2024-11-20 17:59:13.761525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:32:13.966 [2024-11-20 17:59:13.762094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
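errno = 111 in the posix_sock_create failures above is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe over Fabrics port) actively refused the TCP connection, which is expected while the test tears the remote listener down. A minimal sketch of the same failure mode using only POSIX sockets follows; the address is taken from the log and assumed to have no listener when run elsewhere.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port copied from the log above. */
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);            /* NVMe over Fabrics port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* A reachable host with nothing listening answers with a TCP RST,
         * so this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}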
00:32:13.966 [2024-11-20 17:59:13.762108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x908610 (9): Bad file descriptor
00:32:13.966 [2024-11-20 17:59:13.762123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d7cc0 (9): Bad file descriptor
00:32:13.966 [2024-11-20 17:59:13.762133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf71d0 (9): Bad file descriptor
00:32:13.966 [2024-11-20 17:59:13.762141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:32:13.966 [2024-11-20 17:59:13.762147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:32:13.966 [2024-11-20 17:59:13.762154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:32:13.966 [2024-11-20 17:59:13.762265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.966 [2024-11-20 17:59:13.762302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:32:13.966 [2024-11-20 17:59:13.762309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:32:13.966 [2024-11-20 17:59:13.762316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:32:13.966 [2024-11-20 17:59:13.762327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.966 [2024-11-20 17:59:13.762334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.966 [2024-11-20 17:59:13.762341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.966 [2024-11-20 17:59:13.762352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:32:13.966 [2024-11-20 17:59:13.762358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:32:13.966 [2024-11-20 17:59:13.762365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
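The aborted READs in the surrounding runs all complete with ABORTED - SQ DELETION and dnr:0, a transient path-level abort (the submission queue was deleted during the controller reset) rather than a media error. As a hypothetical sketch only, not SPDK's actual bdev_nvme retry policy, this is roughly how a completion consumer might separate such requeue-worthy aborts from fatal statuses:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Parsed NVMe completion status; field names are illustrative. */
struct cpl_status {
    uint8_t sct;  /* status code type, 0x0 = generic */
    uint8_t sc;   /* status code, 0x08 = aborted, SQ deletion */
    bool dnr;     /* controller set "do not retry" */
};

static bool should_requeue(const struct cpl_status *st)
{
    if (st->dnr)
        return false;                 /* controller forbids retry */
    if (st->sct == 0x0 && st->sc == 0x08)
        return true;                  /* aborted by SQ deletion: transient */
    return false;                     /* everything else: surface the error */
}

int main(void)
{
    /* The (00/08) dnr:0 case printed throughout this log. */
    struct cpl_status st = { .sct = 0x0, .sc = 0x08, .dnr = false };
    printf("requeue: %s\n", should_requeue(&st) ? "yes" : "no");
    return 0;
}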
00:32:13.966 [2024-11-20 17:59:13.762415 .. 17:59:13.763514] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:24576..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 identical command/completion pairs condensed; lba advances by 128 per command)
00:32:13.968 [2024-11-20 17:59:13.763523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe630b0 is same with the state(6) to be set
00:32:13.968 [2024-11-20 17:59:13.764810 .. 17:59:13.765926] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:24576..32640 len:128, same pattern, each completion ABORTED - SQ DELETION (00/08) (64 identical command/completion pairs condensed)
00:32:13.970 [2024-11-20 17:59:13.765935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde0fa0 is same with the state(6) to be set
00:32:13.970 [2024-11-20 17:59:13.767219 .. 17:59:13.767611] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..21 nsid:1 lba:16384..19072 len:128, same pattern, each completion ABORTED - SQ DELETION (00/08) (identical command/completion pairs condensed; the completion for cid:21 begins at the truncated entry below and the stream continues past this excerpt)
00:32:13.970 [2024-11-20 17:59:13.767618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.970 [2024-11-20 17:59:13.767628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.970 [2024-11-20 17:59:13.767635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.970 [2024-11-20 17:59:13.767645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.970 [2024-11-20 17:59:13.767652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.970 [2024-11-20 17:59:13.767662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.970 [2024-11-20 17:59:13.767669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.970 [2024-11-20 17:59:13.767679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.970 [2024-11-20 17:59:13.767686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.970 [2024-11-20 17:59:13.767696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.970 [2024-11-20 17:59:13.767703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.970 [2024-11-20 17:59:13.767713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.970 [2024-11-20 17:59:13.767720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.970 [2024-11-20 17:59:13.767732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.970 [2024-11-20 17:59:13.767739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.970 [2024-11-20 17:59:13.767749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.970 [2024-11-20 17:59:13.767756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.767986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.767994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.768341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.768350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde23e0 is same with the state(6) to be set 00:32:13.971 [2024-11-20 17:59:13.769858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.769877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.769893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.769901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.769910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.769918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.971 [2024-11-20 17:59:13.769928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.971 [2024-11-20 17:59:13.769935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.769945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.769952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.769962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.769969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.769979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.769996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.972 [2024-11-20 17:59:13.770583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.972 [2024-11-20 17:59:13.770590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:13.973 [2024-11-20 17:59:13.770691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 
17:59:13.770863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:13.973 [2024-11-20 17:59:13.770966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.973 [2024-11-20 17:59:13.770975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde3960 is same with the state(6) to be set 00:32:13.973 [2024-11-20 17:59:13.772461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:32:13.973 [2024-11-20 17:59:13.772483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:32:13.973 [2024-11-20 17:59:13.772493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.973 [2024-11-20 17:59:13.772500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.973 [2024-11-20 17:59:13.772506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.973 [2024-11-20 17:59:13.772513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:32:13.973 [2024-11-20 17:59:13.772587] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:13.973 [2024-11-20 17:59:13.772601] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
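Every completion in the bursts above carries the status pair (00/08): status code type 0x00 (generic command status) and status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion. That is expected during this shutdown test, since the I/O submission queues are deleted while 64 reads (cid 0-63) are still outstanding on each qpair. A minimal, self-contained sketch of decoding the printed "(SCT/SC)" pair follows; the helper names are illustrative for reading these logs, not SPDK's API:

    #include <stdio.h>
    #include <stdint.h>

    /* Decode the "(SCT/SC)" pair that completion loggers emit,
     * e.g. "(00/08)" -> generic status, ABORTED - SQ DELETION.
     * Hypothetical helper, not an SPDK function. */
    static const char *status_string(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0) {                 /* generic command status type */
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x08: return "ABORTED - SQ DELETION";
            default:   return "OTHER GENERIC STATUS";
            }
        }
        return "NON-GENERIC STATUS TYPE";
    }

    int main(void)
    {
        unsigned sct, sc;
        /* "(00/08)" exactly as printed in the log lines above */
        if (sscanf("(00/08)", "(%02x/%02x)", &sct, &sc) == 2) {
            printf("sct=0x%02x sc=0x%02x -> %s\n", sct, sc,
                   status_string((uint8_t)sct, (uint8_t)sc));
        }
        return 0;
    }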
00:32:13.973 [2024-11-20 17:59:13.772613] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:13.973 [2024-11-20 17:59:13.772670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:32:13.973 [2024-11-20 17:59:13.772681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:32:13.973 task offset: 32128 on job bdev=Nvme3n1 fails
00:32:13.973
00:32:13.973 Latency(us)
[2024-11-20T16:59:13.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:13.973 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.973 Job: Nvme1n1 ended in about 0.96 seconds with error
00:32:13.973 Verification LBA range: start 0x0 length 0x400
00:32:13.973 Nvme1n1 : 0.96 133.97 8.37 66.99 0.00 314905.32 17803.95 234181.97
00:32:13.973 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.973 Job: Nvme2n1 ended in about 0.95 seconds with error
00:32:13.973 Verification LBA range: start 0x0 length 0x400
00:32:13.973 Nvme2n1 : 0.95 145.24 9.08 67.36 0.00 291441.62 16602.45 251658.24
00:32:13.973 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.973 Job: Nvme3n1 ended in about 0.93 seconds with error
00:32:13.973 Verification LBA range: start 0x0 length 0x400
00:32:13.973 Nvme3n1 : 0.93 207.25 12.95 69.08 0.00 219133.17 4724.05 249910.61
00:32:13.973 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.973 Job: Nvme4n1 ended in about 0.96 seconds with error
00:32:13.973 Verification LBA range: start 0x0 length 0x400
00:32:13.973 Nvme4n1 : 0.96 199.62 12.48 66.54 0.00 223139.95 9338.88 253405.87
00:32:13.973 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.973 Job: Nvme5n1 ended in about 0.96 seconds with error
00:32:13.973 Verification LBA range: start 0x0 length 0x400
00:32:13.973 Nvme5n1 : 0.96 200.70 12.54 66.90 0.00 217071.79 7427.41 241172.48
00:32:13.973 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.973 Job: Nvme6n1 ended in about 0.93 seconds with error
00:32:13.974 Verification LBA range: start 0x0 length 0x400
00:32:13.974 Nvme6n1 : 0.93 206.44 12.90 68.81 0.00 205576.59 2498.56 253405.87
00:32:13.974 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.974 Job: Nvme7n1 ended in about 0.95 seconds with error
00:32:13.974 Verification LBA range: start 0x0 length 0x400
00:32:13.974 Nvme7n1 : 0.95 201.56 12.60 5.25 0.00 261806.89 19223.89 255153.49
00:32:13.974 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.974 Job: Nvme8n1 ended in about 0.96 seconds with error
00:32:13.974 Verification LBA range: start 0x0 length 0x400
00:32:13.974 Nvme8n1 : 0.96 199.12 12.44 66.37 0.00 204368.64 17476.27 253405.87
00:32:13.974 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.974 Job: Nvme9n1 ended in about 0.97 seconds with error
00:32:13.974 Verification LBA range: start 0x0 length 0x400
00:32:13.974 Nvme9n1 : 0.97 132.42 8.28 66.21 0.00 266957.94 39321.60 260396.37
00:32:13.974 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:13.974 Job: Nvme10n1 ended in about 0.97 seconds with error
00:32:13.974 Verification LBA range: start 0x0 length 0x400
00:32:13.974 Nvme10n1 : 0.97 132.06 8.25 66.03 0.00 261328.78 28180.48 270882.13
00:32:13.974 [2024-11-20T16:59:13.890Z] ===================================================================================================================
00:32:13.974 [2024-11-20T16:59:13.890Z] Total : 1758.38 109.90 609.54 0.00 242163.33 2498.56 270882.13
00:32:13.974 [2024-11-20 17:59:13.798968] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:13.974 [2024-11-20 17:59:13.799001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:32:13.974 [2024-11-20 17:59:13.799350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.799368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d1df0 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.799377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1df0 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.799676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.799686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe19630 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.799693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe19630 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.799988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.799998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x105ad20 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.800006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105ad20 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.801066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:32:13.974 [2024-11-20 17:59:13.801079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:32:13.974 [2024-11-20 17:59:13.801089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.974 [2024-11-20 17:59:13.801420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.801434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0bde0 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.801441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0bde0 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.801693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.801703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0cca0 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.801710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0cca0 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.802043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.802052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe13500 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.802059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13500 is same with the state(6) to be set
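The MiB/s column in the Latency(us) table above is just IOPS scaled by the job's 65536-byte I/O size, and since 65536/2^20 = 1/16, each row's MiB/s is IOPS divided by 16, which makes the table easy to sanity-check (Nvme1n1: 133.97/16 = 8.37; Total: 1758.38/16 = 109.90). A small illustrative check of that arithmetic:

    #include <stdio.h>

    /* Sanity-check bdevperf's MiB/s column: throughput = IOPS * io_size.
     * With 65536-byte I/Os, MiB/s is exactly IOPS / 16. */
    int main(void)
    {
        const double io_size = 65536.0;             /* bytes, from the job header */
        const double iops[]  = { 133.97, 1758.38 }; /* Nvme1n1 row and Total row */
        for (int i = 0; i < 2; i++) {
            double mibps = iops[i] * io_size / (1024.0 * 1024.0);
            printf("%.2f IOPS -> %.2f MiB/s\n", iops[i], mibps);
        }
        return 0;   /* prints 8.37 and 109.90, matching the table */
    }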
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13500 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.802071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d1df0 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.802087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe19630 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.802096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105ad20 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.802121] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:13.974 [2024-11-20 17:59:13.802137] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:13.974 [2024-11-20 17:59:13.802148] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:13.974 [2024-11-20 17:59:13.802177] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:13.974 [2024-11-20 17:59:13.802455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:32:13.974 [2024-11-20 17:59:13.802754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.802768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c9a60 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.802775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9a60 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.802977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.802987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf71d0 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.802994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf71d0 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.803305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.803315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d7cc0 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.803322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d7cc0 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.803331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0bde0 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.803341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0cca0 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.803350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe13500 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.803359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:32:13.974 [2024-11-20 17:59:13.803365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:32:13.974 [2024-11-20 17:59:13.803373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:32:13.974 [2024-11-20 17:59:13.803384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:32:13.974 [2024-11-20 17:59:13.803390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:32:13.974 [2024-11-20 17:59:13.803397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:32:13.974 [2024-11-20 17:59:13.803407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:32:13.974 [2024-11-20 17:59:13.803413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:32:13.974 [2024-11-20 17:59:13.803420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:32:13.974 [2024-11-20 17:59:13.803486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.974 [2024-11-20 17:59:13.803498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.974 [2024-11-20 17:59:13.803505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.974 [2024-11-20 17:59:13.803819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.974 [2024-11-20 17:59:13.803829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x908610 with addr=10.0.0.2, port=4420
00:32:13.974 [2024-11-20 17:59:13.803837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x908610 is same with the state(6) to be set
00:32:13.974 [2024-11-20 17:59:13.803846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9a60 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.803856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf71d0 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.803865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d7cc0 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.803873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:32:13.974 [2024-11-20 17:59:13.803879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:32:13.974 [2024-11-20 17:59:13.803886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:32:13.974 [2024-11-20 17:59:13.803895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:32:13.974 [2024-11-20 17:59:13.803902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:32:13.974 [2024-11-20 17:59:13.803909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:32:13.974 [2024-11-20 17:59:13.803918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:32:13.974 [2024-11-20 17:59:13.803924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:32:13.974 [2024-11-20 17:59:13.803931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:32:13.974 [2024-11-20 17:59:13.803961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.974 [2024-11-20 17:59:13.803968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.974 [2024-11-20 17:59:13.803974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.974 [2024-11-20 17:59:13.803982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x908610 (9): Bad file descriptor
00:32:13.974 [2024-11-20 17:59:13.803990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:32:13.974 [2024-11-20 17:59:13.803996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:32:13.974 [2024-11-20 17:59:13.804003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:32:13.975 [2024-11-20 17:59:13.804012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:32:13.975 [2024-11-20 17:59:13.804019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:32:13.975 [2024-11-20 17:59:13.804026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:32:13.975 [2024-11-20 17:59:13.804035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.975 [2024-11-20 17:59:13.804041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.975 [2024-11-20 17:59:13.804048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.975 [2024-11-20 17:59:13.804078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.975 [2024-11-20 17:59:13.804088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.975 [2024-11-20 17:59:13.804094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.975 [2024-11-20 17:59:13.804100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:32:13.975 [2024-11-20 17:59:13.804107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:32:13.975 [2024-11-20 17:59:13.804114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:32:13.975 [2024-11-20 17:59:13.804141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
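The burst above is tc3's fault injection playing out: with the target process already gone, every reconnect attempt dies in posix_sock_create() with errno 111 (ECONNREFUSED), and bdev_nvme eventually gives up and marks cnode1 through cnode10 failed. How long that takes is governed by the bdev_nvme reconnect knobs. A minimal sketch of attaching one controller with explicit bounds follows; the option spellings (--ctrlr-loss-timeout-sec, --reconnect-delay-sec) are recalled from SPDK's scripts/rpc.py and the numeric values are illustrative assumptions, not taken from this run:

    # Sketch only; timeouts are illustrative, not from this log. With these
    # bounds, bdev_nvme retries the connection every 2 s and declares the
    # controller lost (the "in failed state." lines above) after 10 s.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 10 --reconnect-delay-sec 2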
00:32:14.235 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # nvmfpid= 00:32:14.235 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # sleep 1 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # kill -9 2813327 00:32:15.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 143: kill: (2813327) - No such process 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # true 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@145 -- # stoptarget 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:15.177 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.177 rmmod nvme_tcp 00:32:15.177 rmmod nvme_fabrics 00:32:15.177 rmmod nvme_keyring 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@787 -- # iptables-restore 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.177 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.725 00:32:17.725 real 0m7.790s 00:32:17.725 user 0m19.077s 00:32:17.725 sys 0m1.280s 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:17.725 ************************************ 00:32:17.725 END TEST nvmf_shutdown_tc3 00:32:17.725 ************************************ 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ e810 == \e\8\1\0 ]] 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ tcp == \r\d\m\a ]] 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@174 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:17.725 ************************************ 00:32:17.725 START TEST nvmf_shutdown_tc4 00:32:17.725 ************************************ 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # starttarget 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:32:17.725 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:17.726 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:17.726 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 
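The pci_devs walk above classifies NICs purely by PCI vendor/device ID (0x8086:0x159b is the Intel E810 pair found here) before resolving, just below, each function's kernel net device under /sys/bus/pci/devices/$pci/net. A stand-alone sketch of the same lookup, using pciutils flags as documented rather than anything from this harness:

    # Sketch: list E810 functions (vendor 0x8086, device 0x159b, as matched
    # above) and the net devices backing them. Not part of nvmf/common.sh.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/"$pci"/net 2>/dev/null)"
    done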
00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:17.726 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:17.726 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:17.726 17:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:17.726 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:17.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:17.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:32:17.726 00:32:17.726 --- 10.0.0.2 ping statistics --- 00:32:17.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.726 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:32:17.727 00:32:17.727 --- 10.0.0.1 ping statistics --- 00:32:17.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.727 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=2814635 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 2814635 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 2814635 ']' 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.727 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:17.987 [2024-11-20 17:59:17.655992] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:32:17.987 [2024-11-20 17:59:17.656058] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.987 [2024-11-20 17:59:17.743850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:17.987 [2024-11-20 17:59:17.777590] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.987 [2024-11-20 17:59:17.777626] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.987 [2024-11-20 17:59:17.777632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.987 [2024-11-20 17:59:17.777637] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.987 [2024-11-20 17:59:17.777641] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.987 [2024-11-20 17:59:17.777792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.987 [2024-11-20 17:59:17.777949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:17.987 [2024-11-20 17:59:17.778076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.987 [2024-11-20 17:59:17.778079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:32:18.558 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.558 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:32:18.558 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:18.558 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:18.558 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:18.818 [2024-11-20 17:59:18.498584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:18.818 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:18.819 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:32:18.819 17:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.819 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:18.819 Malloc1 00:32:18.819 [2024-11-20 17:59:18.601202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.819 Malloc2 00:32:18.819 Malloc3 00:32:18.819 Malloc4 00:32:19.079 Malloc5 00:32:19.079 Malloc6 00:32:19.079 Malloc7 00:32:19.079 Malloc8 00:32:19.079 Malloc9 00:32:19.079 Malloc10 00:32:19.079 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.079 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:19.079 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:19.079 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:19.340 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@154 -- # perfpid=2815011 00:32:19.340 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # sleep 5 00:32:19.340 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@153 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:32:19.340 [2024-11-20 17:59:19.067200] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
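Between the transport creation logged above (rpc_cmd nvmf_create_transport -t tcp -o -u 8192) and the spdk_nvme_perf launch, the harness batches per-subsystem RPCs into rpcs.txt; the Malloc1..Malloc10 lines are those bdevs being created and exported as cnode1..cnode10 on the 10.0.0.2:4420 listener. A minimal sketch of one subsystem's worth of that sequence; the malloc geometry and serial number are illustrative assumptions, not values read from this run:

    # One subsystem's worth of the batched setup, sketched; 64 MiB / 512 B
    # blocks and the SPDK1 serial are assumptions, not values from this log.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420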
00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@157 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@160 -- # killprocess 2814635 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2814635 ']' 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2814635 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2814635 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2814635' 00:32:24.630 killing process with pid 2814635 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 2814635 00:32:24.630 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 2814635 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 starting I/O failed: -6 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 starting I/O failed: -6 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 starting I/O failed: -6 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 starting I/O failed: -6 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 starting I/O failed: -6 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 Write completed with error (sct=0, sc=8) 00:32:24.630 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with 
error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 [2024-11-20 17:59:24.086887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.631 starting I/O failed: -6 00:32:24.631 starting I/O failed: -6 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed 
with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 [2024-11-20 17:59:24.087879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:24.631 [2024-11-20 17:59:24.087914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaab3a0 is same with the state(6) to be set 00:32:24.631 [2024-11-20 17:59:24.087943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaab3a0 is same with the state(6) to be set 00:32:24.631 [2024-11-20 17:59:24.087949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaab3a0 is same with the state(6) to be set 00:32:24.631 [2024-11-20 17:59:24.087955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaab3a0 is same with the state(6) to be set 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 
00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 [2024-11-20 17:59:24.088800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.631 starting I/O failed: -6 00:32:24.631 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write 
completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 [2024-11-20 17:59:24.089302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18590 is same with tstarting I/O failed: -6 00:32:24.632 he state(6) to be set 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 [2024-11-20 17:59:24.089340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18590 is same with the state(6) to be set 00:32:24.632 starting I/O failed: -6 00:32:24.632 [2024-11-20 17:59:24.089347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18590 is same with the state(6) to be set 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 [2024-11-20 17:59:24.089352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18590 is same with the state(6) to be set 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 [2024-11-20 17:59:24.089572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18a60 is same with the state(6) to be set 00:32:24.632 starting I/O failed: -6 00:32:24.632 [2024-11-20 17:59:24.089588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18a60 is same with the state(6) to be set 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 [2024-11-20 17:59:24.089594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18a60 is same with the state(6) to be set 00:32:24.632 starting I/O failed: -6 00:32:24.632 [2024-11-20 17:59:24.089600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18a60 is same with the state(6) to be set 00:32:24.632 [2024-11-20 17:59:24.089605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18a60 is same with the state(6) to be set 00:32:24.632 
Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 [2024-11-20 17:59:24.089870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18f30 is same with the state(6) to be set 00:32:24.632 [2024-11-20 17:59:24.089893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18f30 is same with the state(6) to be set 00:32:24.632 [2024-11-20 17:59:24.089899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18f30 is same with the state(6) to be set 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 [2024-11-20 17:59:24.090209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.632 NVMe io qpair process completion error 00:32:24.632 [2024-11-20 17:59:24.090403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd180c0 is same with the state(6) to be set 00:32:24.632 [2024-11-20 17:59:24.090426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd180c0 is same with the state(6) to be set 00:32:24.632 [2024-11-20 17:59:24.090432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd180c0 is same with the state(6) to be set 00:32:24.632 [2024-11-20 17:59:24.090437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd180c0 is same with the state(6) to be set 00:32:24.632 [2024-11-20 17:59:24.090442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd180c0 is same with the state(6) to be set 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O failed: -6 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 Write completed with error (sct=0, sc=8) 00:32:24.632 starting I/O 
[... repeated "Write completed with error (sct=0, sc=8)" entries, interleaved with "starting I/O failed: -6" submission failures ...]
00:32:24.632 [2024-11-20 17:59:24.091604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error and submission-failure entries ...]
00:32:24.633 [2024-11-20 17:59:24.092425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error and submission-failure entries ...]
00:32:24.633 [2024-11-20 17:59:24.093365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error and submission-failure entries ...]
00:32:24.634 [2024-11-20 17:59:24.095209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:24.634 NVMe io qpair process completion error
[... repeated write-error and submission-failure entries ...]
00:32:24.634 [2024-11-20 17:59:24.096380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error and submission-failure entries ...]
00:32:24.634 [2024-11-20 17:59:24.097360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error and submission-failure entries ...]
00:32:24.635 [2024-11-20 17:59:24.098288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error and submission-failure entries ...]
00:32:24.635 [2024-11-20 17:59:24.100073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:32:24.635 NVMe io qpair process completion error
[... repeated write-error and submission-failure entries ...]
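Two distinct failures interleave in this stretch of the log: "Write completed with error" is an NVMe completion carrying a status code, while "starting I/O failed: -6" is a submission that never left the host, i.e. spdk_nvme_ns_cmd_write() returning -ENXIO because the qpair is already disconnected. A hedged sketch of the submission side follows; start_write() and its arguments are illustrative assumptions, while spdk_nvme_ns_cmd_write() and its signature are SPDK's public API.

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
on_write(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	(void)cpl; /* status decoding as in the earlier completion sketch */
}

/* Hypothetical helper: submit one write and report submission failures. */
static int
start_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	    void *buf, uint64_t lba, uint32_t lba_count)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					on_write, NULL, 0 /* io_flags */);
	if (rc != 0) {
		/* rc is a negative errno; -6 is -ENXIO ("No such device
		 * or address"), meaning the qpair no longer accepts I/O. */
		printf("starting I/O failed: %d\n", rc);
	}
	return rc;
}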
[... repeated write-error and submission-failure entries ...]
00:32:24.636 [2024-11-20 17:59:24.101410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error and submission-failure entries ...]
00:32:24.636 [2024-11-20 17:59:24.102237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error and submission-failure entries ...]
00:32:24.636 [2024-11-20 17:59:24.103150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error and submission-failure entries ...]
00:32:24.637 [2024-11-20 17:59:24.104960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:32:24.637 NVMe io qpair process completion error
[... repeated write-error and submission-failure entries ...]
00:32:24.637 [2024-11-20 17:59:24.106069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error and submission-failure entries ...]
00:32:24.637 [2024-11-20 17:59:24.107001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error and submission-failure entries ...]
00:32:24.638 [2024-11-20 17:59:24.108106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error and submission-failure entries ...]
00:32:24.638 [2024-11-20 17:59:24.109513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:32:24.638 NVMe io qpair process completion error
[... repeated write-error and submission-failure entries ...]
00:32:24.639 [2024-11-20 17:59:24.110631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error and submission-failure entries ...]
00:32:24.639 [2024-11-20 17:59:24.111452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.639 starting I/O failed: -6
[... the same submission failure repeated four more times ...]
00:32:24.639 starting I/O failed: -6 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 [2024-11-20 17:59:24.112793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error 
(sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.639 Write completed with error (sct=0, sc=8) 00:32:24.639 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error 
(sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 [2024-11-20 17:59:24.116173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.640 NVMe io qpair process completion error 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, 
sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 [2024-11-20 17:59:24.117283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.640 starting I/O failed: -6 00:32:24.640 starting I/O failed: -6 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 starting I/O failed: -6 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.640 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write 
completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 [2024-11-20 17:59:24.118280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 
00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 [2024-11-20 17:59:24.119181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.641 Write completed with error (sct=0, sc=8) 00:32:24.641 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 
00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 Write completed with error (sct=0, sc=8) 00:32:24.642 starting I/O failed: -6 00:32:24.642 [2024-11-20 17:59:24.120799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:24.642 NVMe io qpair process completion error 00:32:24.642 Write completed with error (sct=0, sc=8) 
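[Editor's note, not part of the captured log: the "(sct=0, sc=8)" fields in the records above are the NVMe status code type and status code of each failed write completion. SCT 0 is the generic command status set, in which SC 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base spec, consistent with the test tearing the qpairs down. As a minimal sketch of where such a record comes from on the initiator side: the callback name and the fprintf wording are assumptions for illustration, while spdk_nvme_cpl_is_error() and spdk_nvme_cpl_get_status_string() are existing SPDK helpers.

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical I/O completion callback (matches SPDK's spdk_nvme_cmd_cb
 * signature): reports the same sct/sc pair seen in the log whenever a
 * write completes with an error status. */
static void
write_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0, sc=8 above: generic status set,
		 * Command Aborted due to SQ Deletion. */
		fprintf(stderr, "Write completed with error (sct=%d, sc=%d): %s\n",
			cpl->status.sct, cpl->status.sc,
			spdk_nvme_cpl_get_status_string(&cpl->status));
	}
}

End of editor's note; the captured log resumes below.]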
00:32:24.642 Write completed with error (sct=0, sc=8)
[... repeated I/O-failure records elided ...]
00:32:24.642 [2024-11-20 17:59:24.121935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated I/O-failure records elided ...]
00:32:24.642 [2024-11-20 17:59:24.122768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated I/O-failure records elided ...]
00:32:24.643 [2024-11-20 17:59:24.123701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated I/O-failure records elided ...]
00:32:24.644 [2024-11-20 17:59:24.127155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.644 NVMe io qpair process completion error
[... repeated I/O-failure records elided ...]
00:32:24.644 [2024-11-20 17:59:24.128289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated I/O-failure records elided ...]
00:32:24.644 [2024-11-20 17:59:24.129190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated I/O-failure records elided ...]
00:32:24.645 [2024-11-20 17:59:24.130088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated I/O-failure records elided ...]
00:32:24.645 [2024-11-20 17:59:24.131522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:24.645 NVMe io qpair process completion error
[... repeated I/O-failure records elided ...]
00:32:24.646 [2024-11-20 17:59:24.132667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.646 Write completed with error (sct=0, sc=8)
Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 [2024-11-20 17:59:24.133509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 
00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 [2024-11-20 17:59:24.134434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.646 starting I/O failed: -6 00:32:24.646 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 
00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 
00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 Write completed with error (sct=0, sc=8) 00:32:24.647 starting I/O failed: -6 00:32:24.647 [2024-11-20 17:59:24.139010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:24.647 NVMe io qpair process completion error 00:32:24.647 Initializing NVMe Controllers 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:32:24.647 Controller IO queue size 128, less than required. 00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:32:24.647 Controller IO queue size 128, less than required. 
00:32:24.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:32:24.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:32:24.647 Initialization complete. Launching workers.
00:32:24.647 ========================================================
00:32:24.647 Latency(us)
00:32:24.647 Device Information : IOPS MiB/s Average min max
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1919.22 82.47 66712.53 700.58 118483.24
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1911.25 82.12 67016.68 683.09 152432.84
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1917.50 82.39 66824.52 726.13 122518.15
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1898.77 81.59 67507.38 761.18 122343.19
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1928.69 82.87 66508.23 705.45 122795.18
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1898.98 81.60 67572.37 511.95 121310.10
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1881.33 80.84 68252.87 826.01 133317.58
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1903.50 81.79 67477.15 822.34 121532.54
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1905.66 81.88 66697.91 677.58 121265.62
00:32:24.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1898.34 81.57 66978.25 663.18 120959.30
00:32:24.647 ========================================================
00:32:24.647 Total : 19063.23 819.12 67151.80 511.95 152432.84
00:32:24.647
00:32:24.647 [2024-11-20 17:59:24.143633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dbe40 is same with the state(6) to be set
00:32:24.647 [2024-11-20 17:59:24.143676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62080 is same with the state(6) to be set
00:32:24.647 [2024-11-20 17:59:24.143706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa58280 is same with the state(6) to be set
00:32:24.648 [2024-11-20 17:59:24.143735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc7d0 is same with the state(6) to be set
00:32:24.648 [2024-11-20 17:59:24.143764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa53370 is same
with the state(6) to be set 00:32:24.648 [2024-11-20 17:59:24.143795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5d180 is same with the state(6) to be set 00:32:24.648 [2024-11-20 17:59:24.143825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6be80 is same with the state(6) to be set 00:32:24.648 [2024-11-20 17:59:24.143853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66f80 is same with the state(6) to be set 00:32:24.648 [2024-11-20 17:59:24.143880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc4a0 is same with the state(6) to be set 00:32:24.648 [2024-11-20 17:59:24.143908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc170 is same with the state(6) to be set 00:32:24.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:32:24.648 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@161 -- # nvmfpid= 00:32:24.648 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@164 -- # sleep 1 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # wait 2815011 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # true 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@166 -- # stoptarget 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.589 rmmod nvme_tcp 00:32:25.589 rmmod nvme_fabrics 00:32:25.589 rmmod nvme_keyring 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n '' ']' 
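
The "Controller IO queue size 128, less than required" warnings above mean the perf run requested more queue entries than the controller advertises (128), so the excess I/Os sit queued inside the NVMe driver. A minimal sketch of how the workload could be re-issued within that limit, assuming the spdk_nvme_perf flags in this build match upstream SPDK (-q queue depth, -o I/O size in bytes, -w workload, -t runtime in seconds, -r transport ID); the address and subsystem are taken from the log above, the numeric values are illustrative:

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    # Keep the queue depth at or below the controller's advertised limit (128)
    # and use a modest I/O size so requests are not queued at the NVMe driver.
    "$PERF" -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
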
00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.589 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.133 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:28.133 00:32:28.133 real 0m10.292s 00:32:28.133 user 0m27.979s 00:32:28.133 sys 0m4.028s 00:32:28.133 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:28.133 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:28.133 ************************************ 00:32:28.133 END TEST nvmf_shutdown_tc4 00:32:28.133 ************************************ 00:32:28.133 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@177 -- # trap - SIGINT SIGTERM EXIT 00:32:28.133 00:32:28.133 real 0m43.884s 00:32:28.133 user 1m47.883s 00:32:28.133 sys 0m13.922s 00:32:28.133 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:28.134 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:28.134 ************************************ 00:32:28.134 END TEST nvmf_shutdown 00:32:28.134 ************************************ 00:32:28.134 17:59:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:28.134 00:32:28.134 real 19m33.498s 00:32:28.134 user 51m41.467s 00:32:28.134 sys 4m46.092s 00:32:28.134 17:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:28.134 17:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:28.134 ************************************ 00:32:28.134 END TEST nvmf_target_extra 00:32:28.134 ************************************ 00:32:28.134 17:59:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:32:28.134 17:59:27 nvmf_tcp -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:28.134 17:59:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:28.134 17:59:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.134 ************************************ 00:32:28.134 START TEST nvmf_host 00:32:28.134 ************************************ 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:32:28.134 * Looking for test storage... 00:32:28.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:28.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.134 --rc genhtml_branch_coverage=1 00:32:28.134 --rc genhtml_function_coverage=1 00:32:28.134 --rc genhtml_legend=1 00:32:28.134 --rc geninfo_all_blocks=1 00:32:28.134 --rc geninfo_unexecuted_blocks=1 00:32:28.134 00:32:28.134 ' 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:28.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.134 --rc genhtml_branch_coverage=1 00:32:28.134 --rc genhtml_function_coverage=1 00:32:28.134 --rc genhtml_legend=1 00:32:28.134 --rc geninfo_all_blocks=1 00:32:28.134 --rc geninfo_unexecuted_blocks=1 00:32:28.134 00:32:28.134 ' 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:28.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.134 --rc genhtml_branch_coverage=1 00:32:28.134 --rc genhtml_function_coverage=1 00:32:28.134 --rc genhtml_legend=1 00:32:28.134 --rc geninfo_all_blocks=1 00:32:28.134 --rc geninfo_unexecuted_blocks=1 00:32:28.134 00:32:28.134 ' 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:28.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.134 --rc genhtml_branch_coverage=1 00:32:28.134 --rc genhtml_function_coverage=1 00:32:28.134 --rc genhtml_legend=1 00:32:28.134 --rc geninfo_all_blocks=1 00:32:28.134 --rc geninfo_unexecuted_blocks=1 00:32:28.134 00:32:28.134 ' 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
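
The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.x: lt splits both version strings on '.', '-' and ':' and compares them field by field until one side wins. A standalone sketch of the same comparison, under a hypothetical name (ver_lt) and assuming purely numeric fields (the real helper first routes each field through its decimal sanitizer):

    # Field-by-field version compare: returns 0 (true) when $1 < $2.
    ver_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # Missing fields compare as 0, so "1.15" vs "2" compares 1 vs 2 first.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal -> not less-than
    }

    ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace's return 0
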
00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:32:28.134 17:59:27 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:28.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.135 ************************************ 00:32:28.135 START TEST nvmf_multicontroller 00:32:28.135 ************************************ 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:28.135 * Looking for test storage... 
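
The "[: : integer expression expected" message a few lines up is bash's test builtin rejecting an empty operand: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and -eq needs integers on both sides, so the test prints the error and simply evaluates false (the run continues, just noisily). A guard with a parameter-expansion default avoids it; SPDK_TEST_FOO here is a hypothetical variable, not the one common.sh actually checks:

    # Default the possibly-unset variable to 0 before the numeric comparison,
    # so [ ... -eq 1 ] always sees an integer.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi
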
00:32:28.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:32:28.135 17:59:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:28.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.396 --rc genhtml_branch_coverage=1 00:32:28.396 --rc genhtml_function_coverage=1 00:32:28.396 --rc genhtml_legend=1 00:32:28.396 --rc geninfo_all_blocks=1 00:32:28.396 --rc geninfo_unexecuted_blocks=1 00:32:28.396 00:32:28.396 ' 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:28.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.396 --rc genhtml_branch_coverage=1 00:32:28.396 --rc genhtml_function_coverage=1 00:32:28.396 --rc genhtml_legend=1 00:32:28.396 --rc geninfo_all_blocks=1 00:32:28.396 --rc geninfo_unexecuted_blocks=1 00:32:28.396 00:32:28.396 ' 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:28.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.396 --rc genhtml_branch_coverage=1 00:32:28.396 --rc genhtml_function_coverage=1 00:32:28.396 --rc genhtml_legend=1 00:32:28.396 --rc geninfo_all_blocks=1 00:32:28.396 --rc geninfo_unexecuted_blocks=1 00:32:28.396 00:32:28.396 ' 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:28.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.396 --rc genhtml_branch_coverage=1 00:32:28.396 --rc genhtml_function_coverage=1 00:32:28.396 --rc genhtml_legend=1 00:32:28.396 --rc geninfo_all_blocks=1 00:32:28.396 --rc geninfo_unexecuted_blocks=1 00:32:28.396 00:32:28.396 ' 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:32:28.396 17:59:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:28.396 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:28.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:28.397 17:59:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:32:28.397 17:59:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.540 
17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:36.540 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.540 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:36.541 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:36.541 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:36.541 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.541 17:59:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:32:36.541 00:32:36.541 --- 10.0.0.2 ping statistics --- 00:32:36.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.541 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:36.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:32:36.541 00:32:36.541 --- 10.0.0.1 ping statistics --- 00:32:36.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.541 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=2820366 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 2820366 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2820366 ']' 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.541 17:59:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.541 [2024-11-20 17:59:35.654679] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
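(For reference, the nvmf_tcp_init sequence traced above reduces to the following iproute2 recipe; interface names, addresses, and the iptables rule are exactly as logged, error handling omitted:)

    ip netns add cvl_0_0_ns_spdk                       # target runs in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                 # host -> namespace reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host reachability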
00:32:36.541 [2024-11-20 17:59:35.654748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.541 [2024-11-20 17:59:35.743758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:36.542 [2024-11-20 17:59:35.791287] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.542 [2024-11-20 17:59:35.791342] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.542 [2024-11-20 17:59:35.791350] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.542 [2024-11-20 17:59:35.791357] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.542 [2024-11-20 17:59:35.791363] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.542 [2024-11-20 17:59:35.791518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.542 [2024-11-20 17:59:35.791676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.542 [2024-11-20 17:59:35.791677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:36.803 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:36.803 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:32:36.803 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:36.803 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:36.803 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.803 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.803 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:36.803 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 [2024-11-20 17:59:36.526884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 Malloc0 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 [2024-11-20 17:59:36.606807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 [2024-11-20 17:59:36.618688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 Malloc1 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2820619 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2820619 /var/tmp/bdevperf.sock 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2820619 ']' 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:36.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
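(Condensed view of the provisioning RPCs issued above, a sketch assuming SPDK's stock scripts/rpc.py client, which the harness wraps as rpc_cmd; the second subsystem cnode2/Malloc1 is set up identically:)

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB io_unit_size
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # initiator side: bdevperf idles (-z) on its own RPC socket until told what to attach
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f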
00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.804 17:59:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:37.747 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:37.747 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:32:37.747 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:32:37.747 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.747 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.009 NVMe0n1 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.009 1 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.009 request: 00:32:38.009 { 00:32:38.009 "name": "NVMe0", 00:32:38.009 "trtype": "tcp", 00:32:38.009 "traddr": "10.0.0.2", 00:32:38.009 "adrfam": "ipv4", 00:32:38.009 "trsvcid": "4420", 00:32:38.009 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:32:38.009 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:32:38.009 "hostaddr": "10.0.0.1", 00:32:38.009 "prchk_reftag": false, 00:32:38.009 "prchk_guard": false, 00:32:38.009 "hdgst": false, 00:32:38.009 "ddgst": false, 00:32:38.009 "allow_unrecognized_csi": false, 00:32:38.009 "method": "bdev_nvme_attach_controller", 00:32:38.009 "req_id": 1 00:32:38.009 } 00:32:38.009 Got JSON-RPC error response 00:32:38.009 response: 00:32:38.009 { 00:32:38.009 "code": -114, 00:32:38.009 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:38.009 } 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.009 request: 00:32:38.009 { 00:32:38.009 "name": "NVMe0", 00:32:38.009 "trtype": "tcp", 00:32:38.009 "traddr": "10.0.0.2", 00:32:38.009 "adrfam": "ipv4", 00:32:38.009 "trsvcid": "4420", 00:32:38.009 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:38.009 "hostaddr": "10.0.0.1", 00:32:38.009 "prchk_reftag": false, 00:32:38.009 "prchk_guard": false, 00:32:38.009 "hdgst": false, 00:32:38.009 "ddgst": false, 00:32:38.009 "allow_unrecognized_csi": false, 00:32:38.009 "method": "bdev_nvme_attach_controller", 00:32:38.009 "req_id": 1 00:32:38.009 } 00:32:38.009 Got JSON-RPC error response 00:32:38.009 response: 00:32:38.009 { 00:32:38.009 "code": -114, 00:32:38.009 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:38.009 } 00:32:38.009 17:59:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.009 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.010 request: 00:32:38.010 { 00:32:38.010 "name": "NVMe0", 00:32:38.010 "trtype": "tcp", 00:32:38.010 "traddr": "10.0.0.2", 00:32:38.010 "adrfam": "ipv4", 00:32:38.010 "trsvcid": "4420", 00:32:38.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:38.010 "hostaddr": "10.0.0.1", 00:32:38.010 "prchk_reftag": false, 00:32:38.010 "prchk_guard": false, 00:32:38.010 "hdgst": false, 00:32:38.010 "ddgst": false, 00:32:38.010 "multipath": "disable", 00:32:38.010 "allow_unrecognized_csi": false, 00:32:38.010 "method": "bdev_nvme_attach_controller", 00:32:38.010 "req_id": 1 00:32:38.010 } 00:32:38.010 Got JSON-RPC error response 00:32:38.010 response: 00:32:38.010 { 00:32:38.010 "code": -114, 00:32:38.010 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:32:38.010 } 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.010 17:59:37 
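(With -x disable the same request is refused for a different reason: multipath is explicitly off, so no second path may ever be attached to NVMe0, hence the distinct message "already exists and multipath is disabled". Illustrative form of the call, mirroring the @69 attempt above:)

    # refused outright: with -x disable an existing NVMe0 can never gain another path
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable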
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.010 request: 00:32:38.010 { 00:32:38.010 "name": "NVMe0", 00:32:38.010 "trtype": "tcp", 00:32:38.010 "traddr": "10.0.0.2", 00:32:38.010 "adrfam": "ipv4", 00:32:38.010 "trsvcid": "4420", 00:32:38.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:38.010 "hostaddr": "10.0.0.1", 00:32:38.010 "prchk_reftag": false, 00:32:38.010 "prchk_guard": false, 00:32:38.010 "hdgst": false, 00:32:38.010 "ddgst": false, 00:32:38.010 "multipath": "failover", 00:32:38.010 "allow_unrecognized_csi": false, 00:32:38.010 "method": "bdev_nvme_attach_controller", 00:32:38.010 "req_id": 1 00:32:38.010 } 00:32:38.010 Got JSON-RPC error response 00:32:38.010 response: 00:32:38.010 { 00:32:38.010 "code": -114, 00:32:38.010 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:38.010 } 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.010 17:59:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.322 00:32:38.322 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
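(The -x failover attempt above still fails because it repeats a path already attached, same traddr and trsvcid; the attach that finally succeeds, at @79, adds a genuinely new path: same bdev name, same subsystem cnode1, but the second listener port, leaving NVMe0 with two paths:)

    # succeeds: same subsystem, new portal 10.0.0.2:4421 => second path under NVMe0
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1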
00:32:38.322 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:38.322 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.322 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.322 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.322 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:32:38.322 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.322 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.637 00:32:38.637 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.637 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:38.637 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:32:38.637 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.638 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:38.638 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.638 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:32:38.638 17:59:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:39.579 { 00:32:39.579 "results": [ 00:32:39.579 { 00:32:39.579 "job": "NVMe0n1", 00:32:39.579 "core_mask": "0x1", 00:32:39.579 "workload": "write", 00:32:39.579 "status": "finished", 00:32:39.579 "queue_depth": 128, 00:32:39.579 "io_size": 4096, 00:32:39.579 "runtime": 1.007875, 00:32:39.579 "iops": 28571.99553516061, 00:32:39.579 "mibps": 111.60935755922114, 00:32:39.579 "io_failed": 0, 00:32:39.579 "io_timeout": 0, 00:32:39.579 "avg_latency_us": 4471.667932076258, 00:32:39.579 "min_latency_us": 2102.6133333333332, 00:32:39.579 "max_latency_us": 10813.44 00:32:39.579 } 00:32:39.579 ], 00:32:39.579 "core_count": 1 00:32:39.579 } 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2820619 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 2820619 ']' 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2820619 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2820619 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2820619' 00:32:39.579 killing process with pid 2820619 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2820619 00:32:39.579 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2820619 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:32:39.840 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:39.840 [2024-11-20 17:59:36.749793] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:32:39.840 [2024-11-20 17:59:36.749866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820619 ] 00:32:39.840 [2024-11-20 17:59:36.831236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.840 [2024-11-20 17:59:36.878073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.840 [2024-11-20 17:59:38.238268] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name ca6c6f40-e0cf-41e1-ae92-65a4ec9921c4 already exists 00:32:39.840 [2024-11-20 17:59:38.238313] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:ca6c6f40-e0cf-41e1-ae92-65a4ec9921c4 alias for bdev NVMe1n1 00:32:39.840 [2024-11-20 17:59:38.238322] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:32:39.840 Running I/O for 1 seconds... 00:32:39.840 28557.00 IOPS, 111.55 MiB/s 00:32:39.840 Latency(us) 00:32:39.840 [2024-11-20T16:59:39.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.840 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:32:39.840 NVMe0n1 : 1.01 28572.00 111.61 0.00 0.00 4471.67 2102.61 10813.44 00:32:39.840 [2024-11-20T16:59:39.756Z] =================================================================================================================== 00:32:39.840 [2024-11-20T16:59:39.756Z] Total : 28572.00 111.61 0.00 0.00 4471.67 2102.61 10813.44 00:32:39.840 Received shutdown signal, test time was about 1.000000 seconds 00:32:39.840 00:32:39.840 Latency(us) 00:32:39.840 [2024-11-20T16:59:39.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.840 [2024-11-20T16:59:39.756Z] =================================================================================================================== 00:32:39.840 [2024-11-20T16:59:39.756Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:39.840 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:39.840 rmmod nvme_tcp 00:32:39.840 rmmod nvme_fabrics 00:32:39.840 rmmod nvme_keyring 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
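(Teardown sketch: nvmftestfini unloads the kernel initiator modules with a bounded retry loop, traced above at common.sh@124-129; the "rmmod nvme_tcp/nvme_fabrics/nvme_keyring" lines are modprobe -v output. Only the set +e/-e bracketing, the 1..20 bound, and the modprobe -v -r calls are visible in this log; the break placement below is an assumption:)

    set +e
    for i in {1..20}; do
        # assumption: both removals retried together until they succeed
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e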
00:32:39.840 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 2820366 ']' 00:32:39.841 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 2820366 00:32:39.841 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2820366 ']' 00:32:39.841 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2820366 00:32:39.841 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:32:39.841 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:39.841 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2820366 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2820366' 00:32:40.102 killing process with pid 2820366 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2820366 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2820366 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.102 17:59:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.650 17:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:42.650 00:32:42.650 real 0m14.109s 00:32:42.650 user 0m17.732s 00:32:42.650 sys 0m6.467s 00:32:42.650 17:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:42.650 17:59:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:42.650 ************************************ 00:32:42.650 END TEST nvmf_multicontroller 00:32:42.650 ************************************ 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.650 ************************************ 00:32:42.650 START TEST nvmf_aer 00:32:42.650 ************************************ 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:42.650 * Looking for test storage... 00:32:42.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:42.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.650 --rc genhtml_branch_coverage=1 00:32:42.650 --rc genhtml_function_coverage=1 00:32:42.650 --rc genhtml_legend=1 00:32:42.650 --rc geninfo_all_blocks=1 00:32:42.650 --rc geninfo_unexecuted_blocks=1 00:32:42.650 00:32:42.650 ' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:42.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.650 --rc genhtml_branch_coverage=1 00:32:42.650 --rc genhtml_function_coverage=1 00:32:42.650 --rc genhtml_legend=1 00:32:42.650 --rc geninfo_all_blocks=1 00:32:42.650 --rc geninfo_unexecuted_blocks=1 00:32:42.650 00:32:42.650 ' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:42.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.650 --rc genhtml_branch_coverage=1 00:32:42.650 --rc genhtml_function_coverage=1 00:32:42.650 --rc genhtml_legend=1 00:32:42.650 --rc geninfo_all_blocks=1 00:32:42.650 --rc geninfo_unexecuted_blocks=1 00:32:42.650 00:32:42.650 ' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:42.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.650 --rc genhtml_branch_coverage=1 00:32:42.650 --rc genhtml_function_coverage=1 00:32:42.650 --rc genhtml_legend=1 00:32:42.650 --rc geninfo_all_blocks=1 00:32:42.650 --rc geninfo_unexecuted_blocks=1 00:32:42.650 00:32:42.650 ' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:32:42.650 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:42.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:32:42.651 17:59:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:50.794 17:59:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:50.794 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:50.794 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.794 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:50.795 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:50.795 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:50.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:32:50.795 00:32:50.795 --- 10.0.0.2 ping statistics --- 00:32:50.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.795 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:50.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:32:50.795 00:32:50.795 --- 10.0.0.1 ping statistics --- 00:32:50.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.795 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=2825362 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 2825362 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2825362 ']' 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.795 17:59:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:50.795 [2024-11-20 17:59:49.816744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:32:50.795 [2024-11-20 17:59:49.816808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.795 [2024-11-20 17:59:49.904007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.795 [2024-11-20 17:59:49.952779] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.795 [2024-11-20 17:59:49.952833] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.795 [2024-11-20 17:59:49.952842] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.796 [2024-11-20 17:59:49.952849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.796 [2024-11-20 17:59:49.952855] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.796 [2024-11-20 17:59:49.953006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.796 [2024-11-20 17:59:49.953190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.796 [2024-11-20 17:59:49.953291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:50.796 [2024-11-20 17:59:49.953429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.796 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:50.796 [2024-11-20 17:59:50.704410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.057 Malloc0 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
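The trace from here through the subsystem dump is the target bring-up that aer.sh drives via rpc_cmd. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH; NQN, serial, address, and flag values are the ones visible in the trace (flag semantics not re-derived here):

# Bring-up sequence as traced, assuming nvmf_tgt is already running and
# rpc.py talks to it over the default /var/tmp/spdk.sock.
rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, flags as traced
rpc.py bdev_malloc_create 64 512 --name Malloc0          # 64 MB RAM bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 2                        # any host, serial, max 2 namespaces
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                           # listen inside the test namespace
rpc.py nvmf_get_subsystems                               # verify: JSON like the dump below

The nvmf_get_subsystems output that follows in the log is exactly the verification step: discovery subsystem plus cnode1 with Malloc0 as nsid 1.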
0 ]] 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.057 [2024-11-20 17:59:50.769989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.057 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.057 [ 00:32:51.057 { 00:32:51.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:51.057 "subtype": "Discovery", 00:32:51.057 "listen_addresses": [], 00:32:51.057 "allow_any_host": true, 00:32:51.057 "hosts": [] 00:32:51.057 }, 00:32:51.057 { 00:32:51.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:51.057 "subtype": "NVMe", 00:32:51.057 "listen_addresses": [ 00:32:51.057 { 00:32:51.057 "trtype": "TCP", 00:32:51.057 "adrfam": "IPv4", 00:32:51.057 "traddr": "10.0.0.2", 00:32:51.057 "trsvcid": "4420" 00:32:51.057 } 00:32:51.057 ], 00:32:51.057 "allow_any_host": true, 00:32:51.057 "hosts": [], 00:32:51.057 "serial_number": "SPDK00000000000001", 00:32:51.057 "model_number": "SPDK bdev Controller", 00:32:51.057 "max_namespaces": 2, 00:32:51.057 "min_cntlid": 1, 00:32:51.057 "max_cntlid": 65519, 00:32:51.057 "namespaces": [ 00:32:51.057 { 00:32:51.057 "nsid": 1, 00:32:51.057 "bdev_name": "Malloc0", 00:32:51.057 "name": "Malloc0", 00:32:51.058 "nguid": "7CEEADA6A58E43B2959F963D4C8287AB", 00:32:51.058 "uuid": "7ceeada6-a58e-43b2-959f-963d4c8287ab" 00:32:51.058 } 00:32:51.058 ] 00:32:51.058 } 00:32:51.058 ] 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2825415 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:32:51.058 17:59:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.318 Malloc1 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.318 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.319 Asynchronous Event Request test 00:32:51.319 Attaching to 10.0.0.2 00:32:51.319 Attached to 10.0.0.2 00:32:51.319 Registering asynchronous event callbacks... 00:32:51.319 Starting namespace attribute notice tests for all controllers... 00:32:51.319 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:32:51.319 aer_cb - Changed Namespace 00:32:51.319 Cleaning up... 
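The i=1, i=2, i=3 steps with 0.1 s sleeps traced above are the test's synchronization with the aer helper: the helper touches /tmp/aer_touch_file once its AER callback fires, and the script polls for it. A sketch reconstructed from the traced checks (the 200-iteration cap matches the '-lt 200' tests; the real helper in autotest_common.sh may differ in detail):

# Poll for the touch file written by the aer helper,
# up to 200 * 0.1 s = 20 s, then give up.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        [ "$i" -lt 200 ] || return 1   # timed out
        i=$((i + 1))
        sleep 0.1
    done
    return 0                            # matches the traced 'return 0'
}
waitforfile /tmp/aer_touch_file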
00:32:51.319 [ 00:32:51.319 { 00:32:51.319 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:51.319 "subtype": "Discovery", 00:32:51.319 "listen_addresses": [], 00:32:51.319 "allow_any_host": true, 00:32:51.319 "hosts": [] 00:32:51.319 }, 00:32:51.319 { 00:32:51.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:51.319 "subtype": "NVMe", 00:32:51.319 "listen_addresses": [ 00:32:51.319 { 00:32:51.319 "trtype": "TCP", 00:32:51.319 "adrfam": "IPv4", 00:32:51.319 "traddr": "10.0.0.2", 00:32:51.319 "trsvcid": "4420" 00:32:51.319 } 00:32:51.319 ], 00:32:51.319 "allow_any_host": true, 00:32:51.319 "hosts": [], 00:32:51.319 "serial_number": "SPDK00000000000001", 00:32:51.319 "model_number": "SPDK bdev Controller", 00:32:51.319 "max_namespaces": 2, 00:32:51.319 "min_cntlid": 1, 00:32:51.319 "max_cntlid": 65519, 00:32:51.319 "namespaces": [ 00:32:51.319 { 00:32:51.319 "nsid": 1, 00:32:51.319 "bdev_name": "Malloc0", 00:32:51.319 "name": "Malloc0", 00:32:51.319 "nguid": "7CEEADA6A58E43B2959F963D4C8287AB", 00:32:51.319 "uuid": "7ceeada6-a58e-43b2-959f-963d4c8287ab" 00:32:51.319 }, 00:32:51.319 { 00:32:51.319 "nsid": 2, 00:32:51.319 "bdev_name": "Malloc1", 00:32:51.319 "name": "Malloc1", 00:32:51.319 "nguid": "55965AE1B2F2457C9996ADB1BBD842D8", 00:32:51.319 "uuid": "55965ae1-b2f2-457c-9996-adb1bbd842d8" 00:32:51.319 } 00:32:51.319 ] 00:32:51.319 } 00:32:51.319 ] 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2825415 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.319 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.579 rmmod 
nvme_tcp 00:32:51.579 rmmod nvme_fabrics 00:32:51.579 rmmod nvme_keyring 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 2825362 ']' 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 2825362 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2825362 ']' 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2825362 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2825362 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2825362' 00:32:51.579 killing process with pid 2825362 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2825362 00:32:51.579 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2825362 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.840 17:59:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.751 17:59:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.751 00:32:53.751 real 0m11.610s 00:32:53.751 user 0m8.665s 00:32:53.751 sys 0m6.147s 00:32:53.751 17:59:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:53.751 17:59:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 ************************************ 00:32:53.751 END TEST nvmf_aer 00:32:53.751 ************************************ 00:32:54.012 17:59:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:54.012 17:59:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:54.012 17:59:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.013 ************************************ 00:32:54.013 START TEST nvmf_async_init 00:32:54.013 ************************************ 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:54.013 * Looking for test storage... 00:32:54.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:54.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.013 --rc genhtml_branch_coverage=1 00:32:54.013 --rc genhtml_function_coverage=1 00:32:54.013 --rc genhtml_legend=1 00:32:54.013 --rc geninfo_all_blocks=1 00:32:54.013 --rc geninfo_unexecuted_blocks=1 00:32:54.013 00:32:54.013 ' 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:54.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.013 --rc genhtml_branch_coverage=1 00:32:54.013 --rc genhtml_function_coverage=1 00:32:54.013 --rc genhtml_legend=1 00:32:54.013 --rc geninfo_all_blocks=1 00:32:54.013 --rc geninfo_unexecuted_blocks=1 00:32:54.013 00:32:54.013 ' 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:54.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.013 --rc genhtml_branch_coverage=1 00:32:54.013 --rc genhtml_function_coverage=1 00:32:54.013 --rc genhtml_legend=1 00:32:54.013 --rc geninfo_all_blocks=1 00:32:54.013 --rc geninfo_unexecuted_blocks=1 00:32:54.013 00:32:54.013 ' 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:54.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.013 --rc genhtml_branch_coverage=1 00:32:54.013 --rc genhtml_function_coverage=1 00:32:54.013 --rc genhtml_legend=1 00:32:54.013 --rc geninfo_all_blocks=1 00:32:54.013 --rc geninfo_unexecuted_blocks=1 00:32:54.013 00:32:54.013 ' 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.013 17:59:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.013 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.274 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:54.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:32:54.275 17:59:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5ce0fbeeb2964af38332febe963b6714 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:32:54.275 17:59:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:02.411 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:02.412 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:02.412 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # 
(( 0 > 0 )) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:02.412 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:02.412 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:02.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:02.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:33:02.412 00:33:02.412 --- 10.0.0.2 ping statistics --- 00:33:02.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.412 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:02.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:02.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:33:02.412 00:33:02.412 --- 10.0.0.1 ping statistics --- 00:33:02.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.412 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:33:02.412 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=2829778 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 2829778 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2829778 ']' 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:02.413 18:00:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.413 [2024-11-20 18:00:01.595016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
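The nvmf_tcp_init block traced above builds the point-to-point test topology: the target port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP/4420 is opened with a comment-tagged iptables rule, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace pinned to one core (-m 0x1). A minimal standalone sketch of the same setup, with interface names and addresses taken from the trace (the real logic lives in test/nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'          # the tag lets teardown strip the rule later
    ping -c 1 10.0.0.2                                # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # target runs inside the ns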
00:33:02.413 [2024-11-20 18:00:01.595085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:02.413 [2024-11-20 18:00:01.681716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.413 [2024-11-20 18:00:01.728667] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.413 [2024-11-20 18:00:01.728713] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.413 [2024-11-20 18:00:01.728722] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.413 [2024-11-20 18:00:01.728730] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.413 [2024-11-20 18:00:01.728736] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:02.413 [2024-11-20 18:00:01.728759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.674 [2024-11-20 18:00:02.457219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.674 null0 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5ce0fbeeb2964af38332febe963b6714 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.674 [2024-11-20 18:00:02.517598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.674 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.935 nvme0n1 00:33:02.935 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.935 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:02.935 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.935 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.935 [ 00:33:02.935 { 00:33:02.935 "name": "nvme0n1", 00:33:02.935 "aliases": [ 00:33:02.935 "5ce0fbee-b296-4af3-8332-febe963b6714" 00:33:02.935 ], 00:33:02.935 "product_name": "NVMe disk", 00:33:02.935 "block_size": 512, 00:33:02.935 "num_blocks": 2097152, 00:33:02.935 "uuid": "5ce0fbee-b296-4af3-8332-febe963b6714", 00:33:02.935 "numa_id": 0, 00:33:02.935 "assigned_rate_limits": { 00:33:02.935 "rw_ios_per_sec": 0, 00:33:02.935 "rw_mbytes_per_sec": 0, 00:33:02.935 "r_mbytes_per_sec": 0, 00:33:02.935 "w_mbytes_per_sec": 0 00:33:02.935 }, 00:33:02.935 "claimed": false, 00:33:02.935 "zoned": false, 00:33:02.935 "supported_io_types": { 00:33:02.935 "read": true, 00:33:02.935 "write": true, 00:33:02.935 "unmap": false, 00:33:02.935 "flush": true, 00:33:02.935 "reset": true, 00:33:02.935 "nvme_admin": true, 00:33:02.935 "nvme_io": true, 00:33:02.935 "nvme_io_md": false, 00:33:02.935 "write_zeroes": true, 00:33:02.935 "zcopy": false, 00:33:02.935 "get_zone_info": false, 00:33:02.935 "zone_management": false, 00:33:02.935 "zone_append": false, 00:33:02.935 "compare": true, 00:33:02.935 "compare_and_write": true, 00:33:02.935 "abort": true, 00:33:02.935 "seek_hole": false, 00:33:02.935 "seek_data": false, 00:33:02.936 "copy": true, 00:33:02.936 "nvme_iov_md": false 00:33:02.936 }, 00:33:02.936 
"memory_domains": [ 00:33:02.936 { 00:33:02.936 "dma_device_id": "system", 00:33:02.936 "dma_device_type": 1 00:33:02.936 } 00:33:02.936 ], 00:33:02.936 "driver_specific": { 00:33:02.936 "nvme": [ 00:33:02.936 { 00:33:02.936 "trid": { 00:33:02.936 "trtype": "TCP", 00:33:02.936 "adrfam": "IPv4", 00:33:02.936 "traddr": "10.0.0.2", 00:33:02.936 "trsvcid": "4420", 00:33:02.936 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:02.936 }, 00:33:02.936 "ctrlr_data": { 00:33:02.936 "cntlid": 1, 00:33:02.936 "vendor_id": "0x8086", 00:33:02.936 "model_number": "SPDK bdev Controller", 00:33:02.936 "serial_number": "00000000000000000000", 00:33:02.936 "firmware_revision": "24.09.1", 00:33:02.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:02.936 "oacs": { 00:33:02.936 "security": 0, 00:33:02.936 "format": 0, 00:33:02.936 "firmware": 0, 00:33:02.936 "ns_manage": 0 00:33:02.936 }, 00:33:02.936 "multi_ctrlr": true, 00:33:02.936 "ana_reporting": false 00:33:02.936 }, 00:33:02.936 "vs": { 00:33:02.936 "nvme_version": "1.3" 00:33:02.936 }, 00:33:02.936 "ns_data": { 00:33:02.936 "id": 1, 00:33:02.936 "can_share": true 00:33:02.936 } 00:33:02.936 } 00:33:02.936 ], 00:33:02.936 "mp_policy": "active_passive" 00:33:02.936 } 00:33:02.936 } 00:33:02.936 ] 00:33:02.936 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.936 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:33:02.936 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.936 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:02.936 [2024-11-20 18:00:02.794058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:02.936 [2024-11-20 18:00:02.794140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95dd00 (9): Bad file descriptor 00:33:03.198 [2024-11-20 18:00:02.926264] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.198 [ 00:33:03.198 { 00:33:03.198 "name": "nvme0n1", 00:33:03.198 "aliases": [ 00:33:03.198 "5ce0fbee-b296-4af3-8332-febe963b6714" 00:33:03.198 ], 00:33:03.198 "product_name": "NVMe disk", 00:33:03.198 "block_size": 512, 00:33:03.198 "num_blocks": 2097152, 00:33:03.198 "uuid": "5ce0fbee-b296-4af3-8332-febe963b6714", 00:33:03.198 "numa_id": 0, 00:33:03.198 "assigned_rate_limits": { 00:33:03.198 "rw_ios_per_sec": 0, 00:33:03.198 "rw_mbytes_per_sec": 0, 00:33:03.198 "r_mbytes_per_sec": 0, 00:33:03.198 "w_mbytes_per_sec": 0 00:33:03.198 }, 00:33:03.198 "claimed": false, 00:33:03.198 "zoned": false, 00:33:03.198 "supported_io_types": { 00:33:03.198 "read": true, 00:33:03.198 "write": true, 00:33:03.198 "unmap": false, 00:33:03.198 "flush": true, 00:33:03.198 "reset": true, 00:33:03.198 "nvme_admin": true, 00:33:03.198 "nvme_io": true, 00:33:03.198 "nvme_io_md": false, 00:33:03.198 "write_zeroes": true, 00:33:03.198 "zcopy": false, 00:33:03.198 "get_zone_info": false, 00:33:03.198 "zone_management": false, 00:33:03.198 "zone_append": false, 00:33:03.198 "compare": true, 00:33:03.198 "compare_and_write": true, 00:33:03.198 "abort": true, 00:33:03.198 "seek_hole": false, 00:33:03.198 "seek_data": false, 00:33:03.198 "copy": true, 00:33:03.198 "nvme_iov_md": false 00:33:03.198 }, 00:33:03.198 "memory_domains": [ 00:33:03.198 { 00:33:03.198 "dma_device_id": "system", 00:33:03.198 "dma_device_type": 1 00:33:03.198 } 00:33:03.198 ], 00:33:03.198 "driver_specific": { 00:33:03.198 "nvme": [ 00:33:03.198 { 00:33:03.198 "trid": { 00:33:03.198 "trtype": "TCP", 00:33:03.198 "adrfam": "IPv4", 00:33:03.198 "traddr": "10.0.0.2", 00:33:03.198 "trsvcid": "4420", 00:33:03.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:03.198 }, 00:33:03.198 "ctrlr_data": { 00:33:03.198 "cntlid": 2, 00:33:03.198 "vendor_id": "0x8086", 00:33:03.198 "model_number": "SPDK bdev Controller", 00:33:03.198 "serial_number": "00000000000000000000", 00:33:03.198 "firmware_revision": "24.09.1", 00:33:03.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:03.198 "oacs": { 00:33:03.198 "security": 0, 00:33:03.198 "format": 0, 00:33:03.198 "firmware": 0, 00:33:03.198 "ns_manage": 0 00:33:03.198 }, 00:33:03.198 "multi_ctrlr": true, 00:33:03.198 "ana_reporting": false 00:33:03.198 }, 00:33:03.198 "vs": { 00:33:03.198 "nvme_version": "1.3" 00:33:03.198 }, 00:33:03.198 "ns_data": { 00:33:03.198 "id": 1, 00:33:03.198 "can_share": true 00:33:03.198 } 00:33:03.198 } 00:33:03.198 ], 00:33:03.198 "mp_policy": "active_passive" 00:33:03.198 } 00:33:03.198 } 00:33:03.198 ] 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rokjUPRre8 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rokjUPRre8 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.rokjUPRre8 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.198 18:00:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.198 [2024-11-20 18:00:03.014744] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:03.198 [2024-11-20 18:00:03.014906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.198 [2024-11-20 18:00:03.038823] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:03.198 nvme0n1 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.198 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.459 [ 00:33:03.459 { 00:33:03.459 "name": "nvme0n1", 00:33:03.459 "aliases": [ 00:33:03.459 "5ce0fbee-b296-4af3-8332-febe963b6714" 00:33:03.459 ], 00:33:03.459 "product_name": "NVMe disk", 00:33:03.459 "block_size": 512, 00:33:03.459 "num_blocks": 2097152, 00:33:03.459 "uuid": "5ce0fbee-b296-4af3-8332-febe963b6714", 00:33:03.459 "numa_id": 0, 00:33:03.459 "assigned_rate_limits": { 00:33:03.459 "rw_ios_per_sec": 0, 00:33:03.459 "rw_mbytes_per_sec": 0, 00:33:03.459 "r_mbytes_per_sec": 0, 00:33:03.459 "w_mbytes_per_sec": 0 00:33:03.459 }, 00:33:03.459 "claimed": false, 00:33:03.459 "zoned": false, 00:33:03.459 "supported_io_types": { 00:33:03.459 "read": true, 00:33:03.459 "write": true, 00:33:03.459 "unmap": false, 00:33:03.459 "flush": true, 00:33:03.459 "reset": true, 00:33:03.459 "nvme_admin": true, 00:33:03.459 "nvme_io": true, 00:33:03.459 "nvme_io_md": false, 00:33:03.459 "write_zeroes": true, 00:33:03.459 "zcopy": false, 00:33:03.459 "get_zone_info": false, 00:33:03.459 "zone_management": false, 00:33:03.459 "zone_append": false, 00:33:03.459 "compare": true, 00:33:03.459 "compare_and_write": true, 00:33:03.459 "abort": true, 00:33:03.459 "seek_hole": false, 00:33:03.459 "seek_data": false, 00:33:03.459 "copy": true, 00:33:03.459 "nvme_iov_md": false 00:33:03.459 }, 00:33:03.459 "memory_domains": [ 00:33:03.459 { 00:33:03.459 "dma_device_id": "system", 00:33:03.459 "dma_device_type": 1 00:33:03.459 } 00:33:03.459 ], 00:33:03.459 "driver_specific": { 00:33:03.459 "nvme": [ 00:33:03.459 { 00:33:03.459 "trid": { 00:33:03.459 "trtype": "TCP", 00:33:03.459 "adrfam": "IPv4", 00:33:03.459 "traddr": "10.0.0.2", 00:33:03.459 "trsvcid": "4421", 00:33:03.459 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:03.459 }, 00:33:03.459 "ctrlr_data": { 00:33:03.459 "cntlid": 3, 00:33:03.459 "vendor_id": "0x8086", 00:33:03.459 "model_number": "SPDK bdev Controller", 00:33:03.459 "serial_number": "00000000000000000000", 00:33:03.459 "firmware_revision": "24.09.1", 00:33:03.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:03.459 "oacs": { 00:33:03.459 "security": 0, 00:33:03.459 "format": 0, 00:33:03.459 "firmware": 0, 00:33:03.459 "ns_manage": 0 00:33:03.459 }, 00:33:03.459 "multi_ctrlr": true, 00:33:03.459 "ana_reporting": false 00:33:03.459 }, 00:33:03.459 "vs": { 00:33:03.459 "nvme_version": "1.3" 00:33:03.459 }, 00:33:03.459 "ns_data": { 00:33:03.459 "id": 1, 00:33:03.459 "can_share": true 00:33:03.459 } 00:33:03.459 } 00:33:03.459 ], 00:33:03.459 "mp_policy": "active_passive" 00:33:03.459 } 00:33:03.459 } 00:33:03.459 ] 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.rokjUPRre8 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
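The second half of the test, traced above, exercises TLS. The PSK is written in the NVMe-oF interchange format (NVMeTLSkey-1:01:...) to a mode-0600 temp file, registered as keyring key key0, and the subsystem is flipped from allow-any-host to an explicit host entry bound to that key on a new --secure-channel listener on port 4421; the third bdev dump shows the same namespace reached over 4421 with cntlid 3. A condensed replay (the key value is copied from the trace and is a test vector, not a secret; rpc.py and socket defaults assumed as before):

    KEY_PATH=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"                            # keyring expects the key file to be private
    rpc.py keyring_file_add_key key0 "$KEY_PATH"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 \
        -s 4421 --secure-channel                      # TLS listener (marked experimental in the log)
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0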
00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:03.459 rmmod nvme_tcp 00:33:03.459 rmmod nvme_fabrics 00:33:03.459 rmmod nvme_keyring 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 2829778 ']' 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 2829778 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2829778 ']' 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2829778 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2829778 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2829778' 00:33:03.459 killing process with pid 2829778 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2829778 00:33:03.459 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2829778 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
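Teardown, traced above, relies on the comment tag planted when the firewall rule was added: every rule carrying the SPDK_NVMF marker is dropped in one save/filter/restore pass, after which the namespace helper runs and (just below in the trace) the leftover initiator address is flushed. A sketch; _remove_spdk_ns's body is not shown in this log, so the netns delete here is an assumption about what it does:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # matches the flush traced just below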
00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.720 18:00:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.630 18:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.891 00:33:05.891 real 0m11.836s 00:33:05.891 user 0m4.160s 00:33:05.891 sys 0m6.253s 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:05.891 ************************************ 00:33:05.891 END TEST nvmf_async_init 00:33:05.891 ************************************ 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.891 ************************************ 00:33:05.891 START TEST dma 00:33:05.891 ************************************ 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:33:05.891 * Looking for test storage... 00:33:05.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.891 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:33:06.151 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.151 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:06.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.151 --rc genhtml_branch_coverage=1 00:33:06.151 --rc genhtml_function_coverage=1 00:33:06.151 --rc genhtml_legend=1 00:33:06.151 --rc geninfo_all_blocks=1 00:33:06.151 --rc geninfo_unexecuted_blocks=1 00:33:06.151 00:33:06.151 ' 00:33:06.151 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:06.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.151 --rc genhtml_branch_coverage=1 00:33:06.151 --rc genhtml_function_coverage=1 00:33:06.151 --rc genhtml_legend=1 00:33:06.151 --rc geninfo_all_blocks=1 00:33:06.151 --rc geninfo_unexecuted_blocks=1 00:33:06.151 00:33:06.151 ' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:06.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.152 --rc genhtml_branch_coverage=1 00:33:06.152 --rc genhtml_function_coverage=1 00:33:06.152 --rc genhtml_legend=1 00:33:06.152 --rc geninfo_all_blocks=1 00:33:06.152 --rc geninfo_unexecuted_blocks=1 00:33:06.152 00:33:06.152 ' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:06.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.152 --rc genhtml_branch_coverage=1 00:33:06.152 --rc genhtml_function_coverage=1 00:33:06.152 --rc genhtml_legend=1 00:33:06.152 --rc geninfo_all_blocks=1 00:33:06.152 --rc geninfo_unexecuted_blocks=1 00:33:06.152 00:33:06.152 ' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.152 
18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:06.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:33:06.152 00:33:06.152 real 0m0.243s 00:33:06.152 user 0m0.143s 00:33:06.152 sys 0m0.115s 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:06.152 ************************************ 00:33:06.152 END TEST dma 00:33:06.152 ************************************ 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.152 ************************************ 00:33:06.152 START TEST nvmf_identify 00:33:06.152 
************************************ 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:33:06.152 * Looking for test storage... 00:33:06.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:33:06.152 18:00:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:06.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.413 --rc genhtml_branch_coverage=1 00:33:06.413 --rc genhtml_function_coverage=1 00:33:06.413 --rc genhtml_legend=1 00:33:06.413 --rc geninfo_all_blocks=1 00:33:06.413 --rc geninfo_unexecuted_blocks=1 00:33:06.413 00:33:06.413 ' 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:06.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.413 --rc genhtml_branch_coverage=1 00:33:06.413 --rc genhtml_function_coverage=1 00:33:06.413 --rc genhtml_legend=1 00:33:06.413 --rc geninfo_all_blocks=1 00:33:06.413 --rc geninfo_unexecuted_blocks=1 00:33:06.413 00:33:06.413 ' 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:06.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.413 --rc genhtml_branch_coverage=1 00:33:06.413 --rc genhtml_function_coverage=1 00:33:06.413 --rc genhtml_legend=1 00:33:06.413 --rc geninfo_all_blocks=1 00:33:06.413 --rc geninfo_unexecuted_blocks=1 00:33:06.413 00:33:06.413 ' 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:06.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.413 --rc genhtml_branch_coverage=1 00:33:06.413 --rc genhtml_function_coverage=1 00:33:06.413 --rc genhtml_legend=1 00:33:06.413 --rc geninfo_all_blocks=1 00:33:06.413 --rc geninfo_unexecuted_blocks=1 00:33:06.413 00:33:06.413 ' 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.413 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:06.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:33:06.414 18:00:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:33:14.547 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:14.548 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:14.548 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.548 
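The scan above classified two Intel E810 ports (vendor:device 0x8086:0x159b, ice driver). A sketch of the same lookup done by hand, assuming pciutils and the addresses from this log:
  lspci -D -d 8086:159b                       # lists 0000:4b:00.0 and 0000:4b:00.1
  ls /sys/bus/pci/devices/0000:4b:00.0/net/   # kernel netdev behind the port, e.g. cvl_0_0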
18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:14.548 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:14.548 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:14.548 18:00:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:14.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:33:14.548 00:33:14.548 --- 10.0.0.2 ping statistics --- 00:33:14.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.548 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
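Condensed, the namespace plumbing traced above (interface names, addresses, and the iptables rule exactly as logged; run as root):
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # the reachability check whose replies follow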
00:33:14.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:33:14.548 00:33:14.548 --- 10.0.0.1 ping statistics --- 00:33:14.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.548 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2834943 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2834943 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2834943 ']' 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:14.548 18:00:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.548 [2024-11-20 18:00:13.737076] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
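The target launch above in isolation; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, and paths are relative to the SPDK tree:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app answers on /var/tmp/spdk.sock before issuing RPCs
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done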
00:33:14.548 [2024-11-20 18:00:13.737142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.548 [2024-11-20 18:00:13.826915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:14.548 [2024-11-20 18:00:13.877718] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.548 [2024-11-20 18:00:13.877772] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.548 [2024-11-20 18:00:13.877781] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.548 [2024-11-20 18:00:13.877789] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.548 [2024-11-20 18:00:13.877795] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.548 [2024-11-20 18:00:13.877956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.548 [2024-11-20 18:00:13.878113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:14.548 [2024-11-20 18:00:13.878282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.548 [2024-11-20 18:00:13.878282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.810 [2024-11-20 18:00:14.575593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.810 Malloc0 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
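Together with the add_ns/add_listener calls just below, the whole provisioning sequence reduces to six RPCs (arguments exactly as logged; rpc.py talks to /var/tmp/spdk.sock by default):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420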
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.810 [2024-11-20 18:00:14.677594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:14.810 [ 00:33:14.810 { 00:33:14.810 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:14.810 "subtype": "Discovery", 00:33:14.810 "listen_addresses": [ 00:33:14.810 { 00:33:14.810 "trtype": "TCP", 00:33:14.810 "adrfam": "IPv4", 00:33:14.810 "traddr": "10.0.0.2", 00:33:14.810 "trsvcid": "4420" 00:33:14.810 } 00:33:14.810 ], 00:33:14.810 "allow_any_host": true, 00:33:14.810 "hosts": [] 00:33:14.810 }, 00:33:14.810 { 00:33:14.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:14.810 "subtype": "NVMe", 00:33:14.810 "listen_addresses": [ 00:33:14.810 { 00:33:14.810 "trtype": "TCP", 00:33:14.810 "adrfam": "IPv4", 00:33:14.810 "traddr": "10.0.0.2", 00:33:14.810 "trsvcid": "4420" 00:33:14.810 } 00:33:14.810 ], 00:33:14.810 "allow_any_host": true, 00:33:14.810 "hosts": [], 00:33:14.810 "serial_number": "SPDK00000000000001", 00:33:14.810 "model_number": "SPDK bdev Controller", 00:33:14.810 "max_namespaces": 32, 00:33:14.810 "min_cntlid": 1, 00:33:14.810 "max_cntlid": 65519, 00:33:14.810 "namespaces": [ 00:33:14.810 { 00:33:14.810 "nsid": 1, 00:33:14.810 "bdev_name": "Malloc0", 00:33:14.810 "name": "Malloc0", 00:33:14.810 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:33:14.810 "eui64": "ABCDEF0123456789", 00:33:14.810 "uuid": "7eca3669-ad8f-4886-8cdc-6048dfaff646" 00:33:14.810 } 00:33:14.810 ] 00:33:14.810 } 00:33:14.810 ] 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.810 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:33:15.074 [2024-11-20 18:00:14.728130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:33:15.074 [2024-11-20 18:00:14.728184] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835038 ] 00:33:15.074 [2024-11-20 18:00:14.766358] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:33:15.074 [2024-11-20 18:00:14.766421] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:15.074 [2024-11-20 18:00:14.766426] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:15.074 [2024-11-20 18:00:14.766444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:15.074 [2024-11-20 18:00:14.766456] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:15.074 [2024-11-20 18:00:14.767310] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:33:15.074 [2024-11-20 18:00:14.767355] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc49ad0 0 00:33:15.074 [2024-11-20 18:00:14.781181] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:15.074 [2024-11-20 18:00:14.781200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:15.074 [2024-11-20 18:00:14.781205] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:15.074 [2024-11-20 18:00:14.781209] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:15.074 [2024-11-20 18:00:14.781252] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.781258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.781263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.074 [2024-11-20 18:00:14.781278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:15.074 [2024-11-20 18:00:14.781303] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.074 [2024-11-20 18:00:14.789175] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.074 [2024-11-20 18:00:14.789192] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.074 [2024-11-20 18:00:14.789196] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789201] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.074 [2024-11-20 18:00:14.789213] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:15.074 [2024-11-20 18:00:14.789222] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:33:15.074 [2024-11-20 18:00:14.789227] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:33:15.074 [2024-11-20 18:00:14.789244] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789251] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.074 [2024-11-20 18:00:14.789261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.074 [2024-11-20 18:00:14.789276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.074 [2024-11-20 18:00:14.789504] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.074 [2024-11-20 18:00:14.789510] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.074 [2024-11-20 18:00:14.789514] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789518] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.074 [2024-11-20 18:00:14.789523] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:33:15.074 [2024-11-20 18:00:14.789532] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:33:15.074 [2024-11-20 18:00:14.789539] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789543] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789546] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.074 [2024-11-20 18:00:14.789553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.074 [2024-11-20 18:00:14.789564] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.074 [2024-11-20 18:00:14.789797] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.074 [2024-11-20 18:00:14.789804] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.074 [2024-11-20 18:00:14.789807] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789811] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.074 [2024-11-20 18:00:14.789817] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:33:15.074 [2024-11-20 18:00:14.789826] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:33:15.074 [2024-11-20 18:00:14.789832] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789836] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.074 [2024-11-20 18:00:14.789839] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.074 [2024-11-20 18:00:14.789846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.074 [2024-11-20 18:00:14.789856] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.074 
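All of the *DEBUG* traffic in this stretch is produced by the '-L all' switch on the spdk_nvme_identify invocation above; narrower tracing is possible by naming a single log flag instead (a sketch; 'nvme' as a flag name is an assumption based on the tool's --logflag convention):
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L nvme   # trace only the NVMe driver core instead of every component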
[2024-11-20 18:00:14.790061] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.074 [2024-11-20 18:00:14.790071] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.074 [2024-11-20 18:00:14.790074] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790078] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.075 [2024-11-20 18:00:14.790084] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:15.075 [2024-11-20 18:00:14.790094] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790101] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.075 [2024-11-20 18:00:14.790108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.075 [2024-11-20 18:00:14.790118] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.075 [2024-11-20 18:00:14.790350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.075 [2024-11-20 18:00:14.790357] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.075 [2024-11-20 18:00:14.790360] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790364] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.075 [2024-11-20 18:00:14.790369] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:33:15.075 [2024-11-20 18:00:14.790374] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:33:15.075 [2024-11-20 18:00:14.790382] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:15.075 [2024-11-20 18:00:14.790488] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:33:15.075 [2024-11-20 18:00:14.790492] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:15.075 [2024-11-20 18:00:14.790501] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790505] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790509] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.075 [2024-11-20 18:00:14.790516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.075 [2024-11-20 18:00:14.790527] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.075 [2024-11-20 18:00:14.790742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.075 [2024-11-20 18:00:14.790748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:33:15.075 [2024-11-20 18:00:14.790752] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790756] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.075 [2024-11-20 18:00:14.790761] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:15.075 [2024-11-20 18:00:14.790771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.790778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.075 [2024-11-20 18:00:14.790785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.075 [2024-11-20 18:00:14.790795] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.075 [2024-11-20 18:00:14.791045] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.075 [2024-11-20 18:00:14.791052] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.075 [2024-11-20 18:00:14.791055] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791059] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.075 [2024-11-20 18:00:14.791064] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:15.075 [2024-11-20 18:00:14.791071] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:33:15.075 [2024-11-20 18:00:14.791079] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:33:15.075 [2024-11-20 18:00:14.791087] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:33:15.075 [2024-11-20 18:00:14.791097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791100] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.075 [2024-11-20 18:00:14.791108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.075 [2024-11-20 18:00:14.791118] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.075 [2024-11-20 18:00:14.791382] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.075 [2024-11-20 18:00:14.791390] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.075 [2024-11-20 18:00:14.791393] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791398] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc49ad0): datao=0, datal=4096, cccid=0 00:33:15.075 [2024-11-20 18:00:14.791403] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9f300) on tqpair(0xc49ad0): expected_datao=0, payload_size=4096 
00:33:15.075 [2024-11-20 18:00:14.791407] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791428] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791433] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791652] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.075 [2024-11-20 18:00:14.791658] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.075 [2024-11-20 18:00:14.791662] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791665] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.075 [2024-11-20 18:00:14.791675] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:33:15.075 [2024-11-20 18:00:14.791680] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:33:15.075 [2024-11-20 18:00:14.791684] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:33:15.075 [2024-11-20 18:00:14.791690] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:33:15.075 [2024-11-20 18:00:14.791694] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:33:15.075 [2024-11-20 18:00:14.791699] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:33:15.075 [2024-11-20 18:00:14.791708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:33:15.075 [2024-11-20 18:00:14.791715] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791722] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791725] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.075 [2024-11-20 18:00:14.791733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:15.075 [2024-11-20 18:00:14.791744] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.075 [2024-11-20 18:00:14.791965] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.075 [2024-11-20 18:00:14.791971] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.075 [2024-11-20 18:00:14.791974] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791978] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.075 [2024-11-20 18:00:14.791986] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791990] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.791994] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc49ad0) 00:33:15.075 [2024-11-20 18:00:14.792000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.075 [2024-11-20 18:00:14.792006] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.075 [2024-11-20 18:00:14.792010] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc49ad0) 00:33:15.076 [2024-11-20 18:00:14.792020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.076 [2024-11-20 18:00:14.792026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792030] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792033] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc49ad0) 00:33:15.076 [2024-11-20 18:00:14.792039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.076 [2024-11-20 18:00:14.792045] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792049] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792052] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.076 [2024-11-20 18:00:14.792058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.076 [2024-11-20 18:00:14.792063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:33:15.076 [2024-11-20 18:00:14.792075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:15.076 [2024-11-20 18:00:14.792081] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792085] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc49ad0) 00:33:15.076 [2024-11-20 18:00:14.792092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.076 [2024-11-20 18:00:14.792104] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f300, cid 0, qid 0 00:33:15.076 [2024-11-20 18:00:14.792109] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f480, cid 1, qid 0 00:33:15.076 [2024-11-20 18:00:14.792114] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f600, cid 2, qid 0 00:33:15.076 [2024-11-20 18:00:14.792119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.076 [2024-11-20 18:00:14.792126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f900, cid 4, qid 0 00:33:15.076 [2024-11-20 18:00:14.792409] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.076 [2024-11-20 18:00:14.792416] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.076 [2024-11-20 18:00:14.792419] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792423] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f900) on 
tqpair=0xc49ad0 00:33:15.076 [2024-11-20 18:00:14.792429] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:33:15.076 [2024-11-20 18:00:14.792434] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:33:15.076 [2024-11-20 18:00:14.792445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792449] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc49ad0) 00:33:15.076 [2024-11-20 18:00:14.792455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.076 [2024-11-20 18:00:14.792467] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f900, cid 4, qid 0 00:33:15.076 [2024-11-20 18:00:14.792664] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.076 [2024-11-20 18:00:14.792671] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.076 [2024-11-20 18:00:14.792674] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792678] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc49ad0): datao=0, datal=4096, cccid=4 00:33:15.076 [2024-11-20 18:00:14.792683] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9f900) on tqpair(0xc49ad0): expected_datao=0, payload_size=4096 00:33:15.076 [2024-11-20 18:00:14.792687] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792694] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792698] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792860] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.076 [2024-11-20 18:00:14.792866] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.076 [2024-11-20 18:00:14.792870] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f900) on tqpair=0xc49ad0 00:33:15.076 [2024-11-20 18:00:14.792887] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:33:15.076 [2024-11-20 18:00:14.792918] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc49ad0) 00:33:15.076 [2024-11-20 18:00:14.792929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.076 [2024-11-20 18:00:14.792936] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792940] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.792944] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc49ad0) 00:33:15.076 [2024-11-20 18:00:14.792950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.076 [2024-11-20 18:00:14.792963] nvme_tcp.c: 
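The "Sending keep alive every 5000000 us" figure above reads as half of a 10000 ms keep-alive timeout fetched via GET FEATURES KEEP ALIVE TIMER; the halving rule is inferred from this log, not quoted from the spec:
  kato_ms=10000
  echo $(( kato_ms * 1000 / 2 ))   # 5000000, the interval printed above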
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f900, cid 4, qid 0 00:33:15.076 [2024-11-20 18:00:14.792969] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9fa80, cid 5, qid 0 00:33:15.076 [2024-11-20 18:00:14.797175] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.076 [2024-11-20 18:00:14.797184] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.076 [2024-11-20 18:00:14.797191] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.797195] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc49ad0): datao=0, datal=1024, cccid=4 00:33:15.076 [2024-11-20 18:00:14.797199] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9f900) on tqpair(0xc49ad0): expected_datao=0, payload_size=1024 00:33:15.076 [2024-11-20 18:00:14.797204] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.797211] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.797214] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.797220] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.076 [2024-11-20 18:00:14.797226] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.076 [2024-11-20 18:00:14.797230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.797234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9fa80) on tqpair=0xc49ad0 00:33:15.076 [2024-11-20 18:00:14.837191] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.076 [2024-11-20 18:00:14.837204] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.076 [2024-11-20 18:00:14.837207] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.837211] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f900) on tqpair=0xc49ad0 00:33:15.076 [2024-11-20 18:00:14.837224] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.076 [2024-11-20 18:00:14.837228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc49ad0) 00:33:15.076 [2024-11-20 18:00:14.837236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.076 [2024-11-20 18:00:14.837253] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f900, cid 4, qid 0 00:33:15.076 [2024-11-20 18:00:14.837482] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.076 [2024-11-20 18:00:14.837488] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.077 [2024-11-20 18:00:14.837491] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.837495] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc49ad0): datao=0, datal=3072, cccid=4 00:33:15.077 [2024-11-20 18:00:14.837500] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9f900) on tqpair(0xc49ad0): expected_datao=0, payload_size=3072 00:33:15.077 [2024-11-20 18:00:14.837505] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.837542] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.077 
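The GET LOG PAGE (02) reads here are the standard paged discovery-log fetch: cdw10 00ff0070 pulls the 1024-byte header, 02ff0070 pulls the full 3072-byte page once the record count is known, and the 8-byte 00010070 re-read just below confirms the generation counter did not change mid-read. With a kernel initiator the same information comes from one call (assumes nvme-cli):
  nvme discover -t tcp -a 10.0.0.2 -s 4420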
[2024-11-20 18:00:14.837546] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.837711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.077 [2024-11-20 18:00:14.837717] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.077 [2024-11-20 18:00:14.837721] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.837725] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f900) on tqpair=0xc49ad0 00:33:15.077 [2024-11-20 18:00:14.837733] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.837737] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc49ad0) 00:33:15.077 [2024-11-20 18:00:14.837743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.077 [2024-11-20 18:00:14.837758] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f900, cid 4, qid 0 00:33:15.077 [2024-11-20 18:00:14.838006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.077 [2024-11-20 18:00:14.838013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.077 [2024-11-20 18:00:14.838016] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.838024] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc49ad0): datao=0, datal=8, cccid=4 00:33:15.077 [2024-11-20 18:00:14.838029] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9f900) on tqpair(0xc49ad0): expected_datao=0, payload_size=8 00:33:15.077 [2024-11-20 18:00:14.838033] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.838040] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.838043] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.878340] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.077 [2024-11-20 18:00:14.878352] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.077 [2024-11-20 18:00:14.878356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.077 [2024-11-20 18:00:14.878360] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f900) on tqpair=0xc49ad0 00:33:15.077 ===================================================== 00:33:15.077 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:15.077 ===================================================== 00:33:15.077 Controller Capabilities/Features 00:33:15.077 ================================ 00:33:15.077 Vendor ID: 0000 00:33:15.077 Subsystem Vendor ID: 0000 00:33:15.077 Serial Number: .................... 00:33:15.077 Model Number: ........................................ 
00:33:15.077 Firmware Version: 24.09.1 00:33:15.077 Recommended Arb Burst: 0 00:33:15.077 IEEE OUI Identifier: 00 00 00 00:33:15.077 Multi-path I/O 00:33:15.077 May have multiple subsystem ports: No 00:33:15.077 May have multiple controllers: No 00:33:15.077 Associated with SR-IOV VF: No 00:33:15.077 Max Data Transfer Size: 131072 00:33:15.077 Max Number of Namespaces: 0 00:33:15.077 Max Number of I/O Queues: 1024 00:33:15.077 NVMe Specification Version (VS): 1.3 00:33:15.077 NVMe Specification Version (Identify): 1.3 00:33:15.077 Maximum Queue Entries: 128 00:33:15.077 Contiguous Queues Required: Yes 00:33:15.077 Arbitration Mechanisms Supported 00:33:15.077 Weighted Round Robin: Not Supported 00:33:15.077 Vendor Specific: Not Supported 00:33:15.077 Reset Timeout: 15000 ms 00:33:15.077 Doorbell Stride: 4 bytes 00:33:15.077 NVM Subsystem Reset: Not Supported 00:33:15.077 Command Sets Supported 00:33:15.077 NVM Command Set: Supported 00:33:15.077 Boot Partition: Not Supported 00:33:15.077 Memory Page Size Minimum: 4096 bytes 00:33:15.077 Memory Page Size Maximum: 4096 bytes 00:33:15.077 Persistent Memory Region: Not Supported 00:33:15.077 Optional Asynchronous Events Supported 00:33:15.077 Namespace Attribute Notices: Not Supported 00:33:15.077 Firmware Activation Notices: Not Supported 00:33:15.077 ANA Change Notices: Not Supported 00:33:15.077 PLE Aggregate Log Change Notices: Not Supported 00:33:15.077 LBA Status Info Alert Notices: Not Supported 00:33:15.077 EGE Aggregate Log Change Notices: Not Supported 00:33:15.077 Normal NVM Subsystem Shutdown event: Not Supported 00:33:15.077 Zone Descriptor Change Notices: Not Supported 00:33:15.077 Discovery Log Change Notices: Supported 00:33:15.077 Controller Attributes 00:33:15.077 128-bit Host Identifier: Not Supported 00:33:15.077 Non-Operational Permissive Mode: Not Supported 00:33:15.077 NVM Sets: Not Supported 00:33:15.077 Read Recovery Levels: Not Supported 00:33:15.077 Endurance Groups: Not Supported 00:33:15.077 Predictable Latency Mode: Not Supported 00:33:15.077 Traffic Based Keep ALive: Not Supported 00:33:15.077 Namespace Granularity: Not Supported 00:33:15.077 SQ Associations: Not Supported 00:33:15.077 UUID List: Not Supported 00:33:15.077 Multi-Domain Subsystem: Not Supported 00:33:15.077 Fixed Capacity Management: Not Supported 00:33:15.077 Variable Capacity Management: Not Supported 00:33:15.077 Delete Endurance Group: Not Supported 00:33:15.077 Delete NVM Set: Not Supported 00:33:15.077 Extended LBA Formats Supported: Not Supported 00:33:15.077 Flexible Data Placement Supported: Not Supported 00:33:15.077 00:33:15.077 Controller Memory Buffer Support 00:33:15.077 ================================ 00:33:15.077 Supported: No 00:33:15.077 00:33:15.077 Persistent Memory Region Support 00:33:15.077 ================================ 00:33:15.077 Supported: No 00:33:15.077 00:33:15.077 Admin Command Set Attributes 00:33:15.077 ============================ 00:33:15.077 Security Send/Receive: Not Supported 00:33:15.077 Format NVM: Not Supported 00:33:15.077 Firmware Activate/Download: Not Supported 00:33:15.077 Namespace Management: Not Supported 00:33:15.077 Device Self-Test: Not Supported 00:33:15.077 Directives: Not Supported 00:33:15.077 NVMe-MI: Not Supported 00:33:15.077 Virtualization Management: Not Supported 00:33:15.077 Doorbell Buffer Config: Not Supported 00:33:15.077 Get LBA Status Capability: Not Supported 00:33:15.077 Command & Feature Lockdown Capability: Not Supported 00:33:15.077 Abort Command Limit: 1 00:33:15.077 
Async Event Request Limit: 4 00:33:15.077 Number of Firmware Slots: N/A 00:33:15.078 Firmware Slot 1 Read-Only: N/A 00:33:15.078 Firmware Activation Without Reset: N/A 00:33:15.078 Multiple Update Detection Support: N/A 00:33:15.078 Firmware Update Granularity: No Information Provided 00:33:15.078 Per-Namespace SMART Log: No 00:33:15.078 Asymmetric Namespace Access Log Page: Not Supported 00:33:15.078 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:15.078 Command Effects Log Page: Not Supported 00:33:15.078 Get Log Page Extended Data: Supported 00:33:15.078 Telemetry Log Pages: Not Supported 00:33:15.078 Persistent Event Log Pages: Not Supported 00:33:15.078 Supported Log Pages Log Page: May Support 00:33:15.078 Commands Supported & Effects Log Page: Not Supported 00:33:15.078 Feature Identifiers & Effects Log Page:May Support 00:33:15.078 NVMe-MI Commands & Effects Log Page: May Support 00:33:15.078 Data Area 4 for Telemetry Log: Not Supported 00:33:15.078 Error Log Page Entries Supported: 128 00:33:15.078 Keep Alive: Not Supported 00:33:15.078 00:33:15.078 NVM Command Set Attributes 00:33:15.078 ========================== 00:33:15.078 Submission Queue Entry Size 00:33:15.078 Max: 1 00:33:15.078 Min: 1 00:33:15.078 Completion Queue Entry Size 00:33:15.078 Max: 1 00:33:15.078 Min: 1 00:33:15.078 Number of Namespaces: 0 00:33:15.078 Compare Command: Not Supported 00:33:15.078 Write Uncorrectable Command: Not Supported 00:33:15.078 Dataset Management Command: Not Supported 00:33:15.078 Write Zeroes Command: Not Supported 00:33:15.078 Set Features Save Field: Not Supported 00:33:15.078 Reservations: Not Supported 00:33:15.078 Timestamp: Not Supported 00:33:15.078 Copy: Not Supported 00:33:15.078 Volatile Write Cache: Not Present 00:33:15.078 Atomic Write Unit (Normal): 1 00:33:15.078 Atomic Write Unit (PFail): 1 00:33:15.078 Atomic Compare & Write Unit: 1 00:33:15.078 Fused Compare & Write: Supported 00:33:15.078 Scatter-Gather List 00:33:15.078 SGL Command Set: Supported 00:33:15.078 SGL Keyed: Supported 00:33:15.078 SGL Bit Bucket Descriptor: Not Supported 00:33:15.078 SGL Metadata Pointer: Not Supported 00:33:15.078 Oversized SGL: Not Supported 00:33:15.078 SGL Metadata Address: Not Supported 00:33:15.078 SGL Offset: Supported 00:33:15.078 Transport SGL Data Block: Not Supported 00:33:15.078 Replay Protected Memory Block: Not Supported 00:33:15.078 00:33:15.078 Firmware Slot Information 00:33:15.078 ========================= 00:33:15.078 Active slot: 0 00:33:15.078 00:33:15.078 00:33:15.078 Error Log 00:33:15.078 ========= 00:33:15.078 00:33:15.078 Active Namespaces 00:33:15.078 ================= 00:33:15.078 Discovery Log Page 00:33:15.078 ================== 00:33:15.078 Generation Counter: 2 00:33:15.078 Number of Records: 2 00:33:15.078 Record Format: 0 00:33:15.078 00:33:15.078 Discovery Log Entry 0 00:33:15.078 ---------------------- 00:33:15.078 Transport Type: 3 (TCP) 00:33:15.078 Address Family: 1 (IPv4) 00:33:15.078 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:15.078 Entry Flags: 00:33:15.078 Duplicate Returned Information: 1 00:33:15.078 Explicit Persistent Connection Support for Discovery: 1 00:33:15.078 Transport Requirements: 00:33:15.078 Secure Channel: Not Required 00:33:15.078 Port ID: 0 (0x0000) 00:33:15.078 Controller ID: 65535 (0xffff) 00:33:15.078 Admin Max SQ Size: 128 00:33:15.078 Transport Service Identifier: 4420 00:33:15.078 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:15.078 Transport Address: 10.0.0.2 00:33:15.078 
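The two records in this Discovery Log Page dump are Entry 0 above (the discovery subsystem itself, subsystem type 3) and Entry 1 below (the NVM subsystem nqn.2016-06.io.spdk:cnode1, subsystem type 2); both are decoded from wire-format entries of the Discovery log page (log identifier 0x70). The following is a minimal sketch of how a host program could print one such entry; it assumes the entry layout published in SPDK's spdk/nvmf_spec.h (struct spdk_nvmf_discovery_log_page_entry) and is illustrative, not code from this test run.

    /* Sketch only: print one Discovery Log Page entry in the format
     * dumped above. Assumes the struct spdk_nvmf_discovery_log_page_entry
     * layout from SPDK's public spdk/nvmf_spec.h header. */
    #include <stdio.h>
    #include "spdk/nvmf_spec.h"

    static void
    print_discovery_entry(const struct spdk_nvmf_discovery_log_page_entry *e)
    {
        /* Values seen in this log: trtype 3 = TCP, adrfam 1 = IPv4,
         * subtype 3 = current discovery subsystem, subtype 2 = NVM subsystem. */
        printf("Transport Type: %u\n", e->trtype);
        printf("Address Family: %u\n", e->adrfam);
        printf("Subsystem Type: %u\n", e->subtype);
        printf("Port ID: %u (0x%04x)\n", e->portid, e->portid);
        printf("Controller ID: %u (0x%04x)\n", e->cntlid, e->cntlid);
        printf("Admin Max SQ Size: %u\n", e->asqsz);
        printf("Transport Service Identifier: %.32s\n", e->trsvcid);
        printf("NVM Subsystem Qualified Name: %.256s\n", e->subnqn);
        printf("Transport Address: %.256s\n", e->traddr);
    }

    int
    main(void)
    {
        /* Hypothetical values mirroring Entry 1 below. */
        struct spdk_nvmf_discovery_log_page_entry e = {
            .trtype = 3, .adrfam = 1, .subtype = 2,
            .cntlid = 0xffff, .asqsz = 128,
            .trsvcid = "4420", .traddr = "10.0.0.2",
            .subnqn = "nqn.2016-06.io.spdk:cnode1",
        };
        print_discovery_entry(&e);
        return 0;
    }
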
Discovery Log Entry 1 00:33:15.078 ---------------------- 00:33:15.078 Transport Type: 3 (TCP) 00:33:15.078 Address Family: 1 (IPv4) 00:33:15.078 Subsystem Type: 2 (NVM Subsystem) 00:33:15.078 Entry Flags: 00:33:15.078 Duplicate Returned Information: 0 00:33:15.078 Explicit Persistent Connection Support for Discovery: 0 00:33:15.078 Transport Requirements: 00:33:15.078 Secure Channel: Not Required 00:33:15.078 Port ID: 0 (0x0000) 00:33:15.078 Controller ID: 65535 (0xffff) 00:33:15.078 Admin Max SQ Size: 128 00:33:15.078 Transport Service Identifier: 4420 00:33:15.078 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:33:15.078 Transport Address: 10.0.0.2 [2024-11-20 18:00:14.878459] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:33:15.078 [2024-11-20 18:00:14.878471] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f300) on tqpair=0xc49ad0 00:33:15.078 [2024-11-20 18:00:14.878478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.078 [2024-11-20 18:00:14.878484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f480) on tqpair=0xc49ad0 00:33:15.078 [2024-11-20 18:00:14.878489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.078 [2024-11-20 18:00:14.878494] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f600) on tqpair=0xc49ad0 00:33:15.078 [2024-11-20 18:00:14.878498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.078 [2024-11-20 18:00:14.878504] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.078 [2024-11-20 18:00:14.878508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.078 [2024-11-20 18:00:14.878517] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.878523] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.878527] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.078 [2024-11-20 18:00:14.878534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.078 [2024-11-20 18:00:14.878548] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.078 [2024-11-20 18:00:14.878666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.078 [2024-11-20 18:00:14.878672] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.078 [2024-11-20 18:00:14.878676] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.878680] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.078 [2024-11-20 18:00:14.878687] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.878691] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.878695] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.078 [2024-11-20 18:00:14.878701] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.078 [2024-11-20 18:00:14.878716] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.078 [2024-11-20 18:00:14.878968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.078 [2024-11-20 18:00:14.878974] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.078 [2024-11-20 18:00:14.878978] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.878984] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.078 [2024-11-20 18:00:14.878989] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:33:15.078 [2024-11-20 18:00:14.878998] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:33:15.078 [2024-11-20 18:00:14.879008] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.879012] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.879016] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.078 [2024-11-20 18:00:14.879022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.078 [2024-11-20 18:00:14.879033] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.078 [2024-11-20 18:00:14.879247] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.078 [2024-11-20 18:00:14.879254] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.078 [2024-11-20 18:00:14.879258] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.078 [2024-11-20 18:00:14.879262] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.879272] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879276] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879279] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.079 [2024-11-20 18:00:14.879286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.079 [2024-11-20 18:00:14.879297] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.079 [2024-11-20 18:00:14.879471] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.079 [2024-11-20 18:00:14.879477] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.079 [2024-11-20 18:00:14.879481] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.879494] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879498] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879502] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.079 [2024-11-20 18:00:14.879509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.079 [2024-11-20 18:00:14.879519] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.079 [2024-11-20 18:00:14.879723] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.079 [2024-11-20 18:00:14.879729] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.079 [2024-11-20 18:00:14.879732] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879736] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.879746] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879750] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879754] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.079 [2024-11-20 18:00:14.879761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.079 [2024-11-20 18:00:14.879771] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.079 [2024-11-20 18:00:14.879973] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.079 [2024-11-20 18:00:14.879982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.079 [2024-11-20 18:00:14.879986] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.879990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.880000] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880003] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880007] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.079 [2024-11-20 18:00:14.880014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.079 [2024-11-20 18:00:14.880024] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.079 [2024-11-20 18:00:14.880221] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.079 [2024-11-20 18:00:14.880227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.079 [2024-11-20 18:00:14.880231] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.880244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.079 [2024-11-20 18:00:14.880259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.079 [2024-11-20 18:00:14.880269] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.079 [2024-11-20 18:00:14.880479] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.079 [2024-11-20 18:00:14.880486] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.079 [2024-11-20 18:00:14.880489] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880493] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.880503] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880507] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880510] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.079 [2024-11-20 18:00:14.880517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.079 [2024-11-20 18:00:14.880527] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.079 [2024-11-20 18:00:14.880731] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.079 [2024-11-20 18:00:14.880737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.079 [2024-11-20 18:00:14.880740] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880744] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.880754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880758] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.880762] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.079 [2024-11-20 18:00:14.880768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.079 [2024-11-20 18:00:14.880778] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.079 [2024-11-20 18:00:14.881033] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.079 [2024-11-20 18:00:14.881040] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.079 [2024-11-20 18:00:14.881047] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.881051] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.881062] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.881066] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.881070] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc49ad0) 00:33:15.079 [2024-11-20 18:00:14.881076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.079 [2024-11-20 18:00:14.881087] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9f780, cid 3, qid 0 00:33:15.079 [2024-11-20 18:00:14.885003] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.079 [2024-11-20 18:00:14.885013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.079 [2024-11-20 18:00:14.885016] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.079 [2024-11-20 18:00:14.885020] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9f780) on tqpair=0xc49ad0 00:33:15.079 [2024-11-20 18:00:14.885028] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:33:15.079 00:33:15.079 18:00:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:33:15.079 [2024-11-20 18:00:14.928832] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:33:15.079 [2024-11-20 18:00:14.928886] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835135 ] 00:33:15.080 [2024-11-20 18:00:14.966171] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:33:15.080 [2024-11-20 18:00:14.966233] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:15.080 [2024-11-20 18:00:14.966238] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:15.080 [2024-11-20 18:00:14.966255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:15.080 [2024-11-20 18:00:14.966266] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:15.080 [2024-11-20 18:00:14.966955] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:33:15.080 [2024-11-20 18:00:14.966996] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e5bad0 0 00:33:15.080 [2024-11-20 18:00:14.981184] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:15.080 [2024-11-20 18:00:14.981202] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:15.080 [2024-11-20 18:00:14.981206] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:15.080 [2024-11-20 18:00:14.981210] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:15.080 [2024-11-20 18:00:14.981244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.080 [2024-11-20 18:00:14.981250] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.080 [2024-11-20 18:00:14.981254] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.080 [2024-11-20 18:00:14.981269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:15.080 [2024-11-20 18:00:14.981291] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.347 [2024-11-20 18:00:14.989180] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.347 [2024-11-20 18:00:14.989192] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:33:15.347 [2024-11-20 18:00:14.989196] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.347 [2024-11-20 18:00:14.989213] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:15.347 [2024-11-20 18:00:14.989221] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:33:15.347 [2024-11-20 18:00:14.989227] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:33:15.347 [2024-11-20 18:00:14.989240] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989248] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.347 [2024-11-20 18:00:14.989257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.347 [2024-11-20 18:00:14.989272] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.347 [2024-11-20 18:00:14.989475] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.347 [2024-11-20 18:00:14.989482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.347 [2024-11-20 18:00:14.989485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989489] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.347 [2024-11-20 18:00:14.989494] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:33:15.347 [2024-11-20 18:00:14.989502] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:33:15.347 [2024-11-20 18:00:14.989509] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989516] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.347 [2024-11-20 18:00:14.989523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.347 [2024-11-20 18:00:14.989534] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.347 [2024-11-20 18:00:14.989690] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.347 [2024-11-20 18:00:14.989696] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.347 [2024-11-20 18:00:14.989700] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989704] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.347 [2024-11-20 18:00:14.989709] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:33:15.347 [2024-11-20 18:00:14.989717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to check en wait for cc (timeout 15000 ms) 00:33:15.347 [2024-11-20 18:00:14.989724] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989728] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989731] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.347 [2024-11-20 18:00:14.989738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.347 [2024-11-20 18:00:14.989748] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.347 [2024-11-20 18:00:14.989915] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.347 [2024-11-20 18:00:14.989925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.347 [2024-11-20 18:00:14.989928] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989932] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.347 [2024-11-20 18:00:14.989937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:15.347 [2024-11-20 18:00:14.989947] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989953] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.989957] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.347 [2024-11-20 18:00:14.989963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.347 [2024-11-20 18:00:14.989974] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.347 [2024-11-20 18:00:14.990195] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.347 [2024-11-20 18:00:14.990202] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.347 [2024-11-20 18:00:14.990205] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.347 [2024-11-20 18:00:14.990209] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.347 [2024-11-20 18:00:14.990213] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:33:15.347 [2024-11-20 18:00:14.990218] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:33:15.347 [2024-11-20 18:00:14.990226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:15.347 [2024-11-20 18:00:14.990332] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:33:15.348 [2024-11-20 18:00:14.990337] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:15.348 [2024-11-20 18:00:14.990345] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.990349] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.990352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.990359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.348 [2024-11-20 18:00:14.990369] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.348 [2024-11-20 18:00:14.990468] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.348 [2024-11-20 18:00:14.990474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.348 [2024-11-20 18:00:14.990477] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.990481] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.348 [2024-11-20 18:00:14.990486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:15.348 [2024-11-20 18:00:14.990496] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.990500] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.990503] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.990510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.348 [2024-11-20 18:00:14.990520] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.348 [2024-11-20 18:00:14.990713] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.348 [2024-11-20 18:00:14.990719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.348 [2024-11-20 18:00:14.990723] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.990726] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.348 [2024-11-20 18:00:14.990731] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:15.348 [2024-11-20 18:00:14.990736] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.990743] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:33:15.348 [2024-11-20 18:00:14.990756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.990765] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.990769] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.990776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.348 [2024-11-20 18:00:14.990786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, 
qid 0 00:33:15.348 [2024-11-20 18:00:14.990974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.348 [2024-11-20 18:00:14.990980] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.348 [2024-11-20 18:00:14.990984] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.990988] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e5bad0): datao=0, datal=4096, cccid=0 00:33:15.348 [2024-11-20 18:00:14.990993] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eb1300) on tqpair(0x1e5bad0): expected_datao=0, payload_size=4096 00:33:15.348 [2024-11-20 18:00:14.990997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991016] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991021] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991152] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.348 [2024-11-20 18:00:14.991167] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.348 [2024-11-20 18:00:14.991171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991175] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.348 [2024-11-20 18:00:14.991183] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:33:15.348 [2024-11-20 18:00:14.991188] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:33:15.348 [2024-11-20 18:00:14.991192] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:33:15.348 [2024-11-20 18:00:14.991197] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:33:15.348 [2024-11-20 18:00:14.991201] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:33:15.348 [2024-11-20 18:00:14.991206] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.991214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.991221] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991231] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.991238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:15.348 [2024-11-20 18:00:14.991249] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.348 [2024-11-20 18:00:14.991470] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.348 [2024-11-20 18:00:14.991477] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.348 [2024-11-20 18:00:14.991480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.348 [2024-11-20 
18:00:14.991484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.348 [2024-11-20 18:00:14.991491] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991495] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991498] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.991504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.348 [2024-11-20 18:00:14.991511] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991514] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.991524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.348 [2024-11-20 18:00:14.991530] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991537] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.991543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.348 [2024-11-20 18:00:14.991549] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991553] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991556] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.991562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.348 [2024-11-20 18:00:14.991567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.991578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.991585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991589] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.991595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.348 [2024-11-20 18:00:14.991608] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1300, cid 0, qid 0 00:33:15.348 [2024-11-20 18:00:14.991613] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1480, cid 1, qid 0 00:33:15.348 [2024-11-20 18:00:14.991618] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1600, cid 2, qid 0 00:33:15.348 [2024-11-20 18:00:14.991623] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 
00:33:15.348 [2024-11-20 18:00:14.991627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1900, cid 4, qid 0 00:33:15.348 [2024-11-20 18:00:14.991772] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.348 [2024-11-20 18:00:14.991781] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.348 [2024-11-20 18:00:14.991785] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991789] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1900) on tqpair=0x1e5bad0 00:33:15.348 [2024-11-20 18:00:14.991793] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:33:15.348 [2024-11-20 18:00:14.991798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.991807] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.991816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:33:15.348 [2024-11-20 18:00:14.991822] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991826] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.348 [2024-11-20 18:00:14.991830] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e5bad0) 00:33:15.348 [2024-11-20 18:00:14.991836] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:15.349 [2024-11-20 18:00:14.991847] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1900, cid 4, qid 0 00:33:15.349 [2024-11-20 18:00:14.991964] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.349 [2024-11-20 18:00:14.991970] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.349 [2024-11-20 18:00:14.991973] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:14.991977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1900) on tqpair=0x1e5bad0 00:33:15.349 [2024-11-20 18:00:14.992042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:14.992053] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:14.992060] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:14.992064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e5bad0) 00:33:15.349 [2024-11-20 18:00:14.992071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.349 [2024-11-20 18:00:14.992082] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1900, cid 4, qid 0 00:33:15.349 [2024-11-20 18:00:14.992363] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.349 [2024-11-20 18:00:14.992370] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.349 [2024-11-20 18:00:14.992374] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:14.992377] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e5bad0): datao=0, datal=4096, cccid=4 00:33:15.349 [2024-11-20 18:00:14.992382] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eb1900) on tqpair(0x1e5bad0): expected_datao=0, payload_size=4096 00:33:15.349 [2024-11-20 18:00:14.992386] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:14.992400] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:14.992405] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.037173] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.349 [2024-11-20 18:00:15.037185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.349 [2024-11-20 18:00:15.037189] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.037193] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1900) on tqpair=0x1e5bad0 00:33:15.349 [2024-11-20 18:00:15.037209] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:33:15.349 [2024-11-20 18:00:15.037224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.037234] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.037241] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.037245] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e5bad0) 00:33:15.349 [2024-11-20 18:00:15.037252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.349 [2024-11-20 18:00:15.037266] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1900, cid 4, qid 0 00:33:15.349 [2024-11-20 18:00:15.037473] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.349 [2024-11-20 18:00:15.037479] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.349 [2024-11-20 18:00:15.037483] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.037487] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e5bad0): datao=0, datal=4096, cccid=4 00:33:15.349 [2024-11-20 18:00:15.037491] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eb1900) on tqpair(0x1e5bad0): expected_datao=0, payload_size=4096 00:33:15.349 [2024-11-20 18:00:15.037496] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.037524] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.037529] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.078315] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.349 [2024-11-20 18:00:15.078326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.349 
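Each IDENTIFY admin command logged in this stretch differs only in the CNS code carried in the low byte of cdw10: 0x01 fetched the controller data structure (the first 4096-byte C2HData transfer; C2HData is PDU type 7 here, while type 5 is the CapsuleResp that completes each command), 0x02 fetched the active namespace ID list (which is what produced "Namespace 1 was added"), 0x00 fetched the per-namespace data for nsid 1 (the 4096-byte transfer just above), and 0x03, just below, fetches the namespace identification descriptors. A small decode helper, sketched against the NVMe base specification rather than this test's sources:

    /* Sketch: CNS (Controller or Namespace Structure) codes from
     * cdw10 bits 07:00 of the IDENTIFY commands in this log, per
     * the NVMe base specification. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *
    identify_cns_name(uint32_t cdw10)
    {
        switch (cdw10 & 0xff) {
        case 0x00: return "Identify Namespace (per-nsid data)";
        case 0x01: return "Identify Controller";
        case 0x02: return "Active Namespace ID List";
        case 0x03: return "Namespace Identification Descriptors";
        default:   return "other CNS";
        }
    }

    int
    main(void)
    {
        /* The four cdw10 values issued during this init sequence. */
        uint32_t seen[] = { 0x00000001, 0x00000002, 0x00000000, 0x00000003 };

        for (size_t i = 0; i < sizeof(seen) / sizeof(seen[0]); i++) {
            printf("cdw10:%08x -> %s\n", (unsigned)seen[i],
                   identify_cns_name(seen[i]));
        }
        return 0;
    }
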
[2024-11-20 18:00:15.078330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.078334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1900) on tqpair=0x1e5bad0 00:33:15.349 [2024-11-20 18:00:15.078350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.078360] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.078368] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.078372] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e5bad0) 00:33:15.349 [2024-11-20 18:00:15.078379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.349 [2024-11-20 18:00:15.078392] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1900, cid 4, qid 0 00:33:15.349 [2024-11-20 18:00:15.078629] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.349 [2024-11-20 18:00:15.078636] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.349 [2024-11-20 18:00:15.078640] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.078643] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e5bad0): datao=0, datal=4096, cccid=4 00:33:15.349 [2024-11-20 18:00:15.078648] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eb1900) on tqpair(0x1e5bad0): expected_datao=0, payload_size=4096 00:33:15.349 [2024-11-20 18:00:15.078652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.078666] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.078670] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119306] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.349 [2024-11-20 18:00:15.119317] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.349 [2024-11-20 18:00:15.119321] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119325] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1900) on tqpair=0x1e5bad0 00:33:15.349 [2024-11-20 18:00:15.119334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.119342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.119351] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.119357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.119363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 
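The run of "setting state to ..." transitions here (continuing below through set host ID, transport ready, and finally ready) is the host-side controller initialization state machine that a single connect call drives. A minimal sketch of that call against SPDK's public API, using the same transport string the harness passed to spdk_nvme_identify above; error handling is trimmed and this is an illustration, not the test's own code.

    /* Sketch: connect to the subsystem under test and let the NVMe
     * driver run the CC.EN / CSTS.RDY / IDENTIFY sequence traced in
     * this log. Assumes SPDK's public env and nvme APIs. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) != 0) {
            return 1;
        }

        /* Same transport string the harness passed via -r above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronous connect: returns once the controller reaches the
         * "ready" state that the transition log below ends with. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        printf("max I/O transfer size: %u bytes\n",
               spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

        spdk_nvme_detach(ctrlr);
        return 0;
    }
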
00:33:15.349 [2024-11-20 18:00:15.119368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.119373] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:33:15.349 [2024-11-20 18:00:15.119378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:33:15.349 [2024-11-20 18:00:15.119383] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:33:15.349 [2024-11-20 18:00:15.119402] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119405] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e5bad0) 00:33:15.349 [2024-11-20 18:00:15.119413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.349 [2024-11-20 18:00:15.119420] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119424] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119427] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e5bad0) 00:33:15.349 [2024-11-20 18:00:15.119433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.349 [2024-11-20 18:00:15.119447] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1900, cid 4, qid 0 00:33:15.349 [2024-11-20 18:00:15.119452] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1a80, cid 5, qid 0 00:33:15.349 [2024-11-20 18:00:15.119667] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.349 [2024-11-20 18:00:15.119673] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.349 [2024-11-20 18:00:15.119677] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119681] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1900) on tqpair=0x1e5bad0 00:33:15.349 [2024-11-20 18:00:15.119687] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.349 [2024-11-20 18:00:15.119693] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.349 [2024-11-20 18:00:15.119696] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119700] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1a80) on tqpair=0x1e5bad0 00:33:15.349 [2024-11-20 18:00:15.119710] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119714] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e5bad0) 00:33:15.349 [2024-11-20 18:00:15.119720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.349 [2024-11-20 18:00:15.119734] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1a80, cid 5, qid 0 00:33:15.349 [2024-11-20 18:00:15.119866] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.349 [2024-11-20 18:00:15.119874] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.349 [2024-11-20 18:00:15.119878] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119881] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1a80) on tqpair=0x1e5bad0 00:33:15.349 [2024-11-20 18:00:15.119891] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.349 [2024-11-20 18:00:15.119894] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e5bad0) 00:33:15.350 [2024-11-20 18:00:15.119901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.350 [2024-11-20 18:00:15.119911] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1a80, cid 5, qid 0 00:33:15.350 [2024-11-20 18:00:15.120082] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.350 [2024-11-20 18:00:15.120088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.350 [2024-11-20 18:00:15.120091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1a80) on tqpair=0x1e5bad0 00:33:15.350 [2024-11-20 18:00:15.120104] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120108] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e5bad0) 00:33:15.350 [2024-11-20 18:00:15.120115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.350 [2024-11-20 18:00:15.120126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1a80, cid 5, qid 0 00:33:15.350 [2024-11-20 18:00:15.120300] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.350 [2024-11-20 18:00:15.120307] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.350 [2024-11-20 18:00:15.120310] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120314] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1a80) on tqpair=0x1e5bad0 00:33:15.350 [2024-11-20 18:00:15.120330] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120334] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e5bad0) 00:33:15.350 [2024-11-20 18:00:15.120341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.350 [2024-11-20 18:00:15.120348] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e5bad0) 00:33:15.350 [2024-11-20 18:00:15.120358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.350 [2024-11-20 18:00:15.120365] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e5bad0) 
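The GET LOG PAGE commands around this point pack both the log identifier (LID, cdw10 bits 07:00) and the lower dword count (NUMDL, cdw10 bits 27:16, zero's based) into cdw10, which explains the values printed above and below: 07ff0001 is the Error Information log, 007f0002 SMART / Health, 007f0003 Firmware Slot Information, and 03ff0005 Commands Supported and Effects. A worked decode, sketched from the NVMe base specification:

    /* Sketch: decode the GET LOG PAGE cdw10 values seen in this log.
     * LID lives in bits 07:00 and NUMDL in bits 27:16; NUMD is zero's
     * based, so the transfer size here is (NUMDL + 1) dwords. */
    #include <stdint.h>
    #include <stdio.h>

    static void
    decode_get_log_page(uint32_t cdw10)
    {
        unsigned lid   = cdw10 & 0xff;
        unsigned numdl = (cdw10 >> 16) & 0xfff;

        printf("cdw10:%08x -> LID 0x%02x, %u bytes\n",
               (unsigned)cdw10, lid, (numdl + 1) * 4);
    }

    int
    main(void)
    {
        decode_get_log_page(0x07ff0001);  /* Error Information:  8192 bytes */
        decode_get_log_page(0x007f0002);  /* SMART / Health:      512 bytes */
        decode_get_log_page(0x007f0003);  /* Firmware Slot Info:  512 bytes */
        decode_get_log_page(0x03ff0005);  /* Cmds Supported:     4096 bytes */
        return 0;
    }

Those byte counts line up with the C2HData headers that follow (datal=8192 for cccid=5, datal=512 for cccid=4 and cccid=6, datal=4096 for cccid=7).
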
00:33:15.350 [2024-11-20 18:00:15.120375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.350 [2024-11-20 18:00:15.120385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e5bad0) 00:33:15.350 [2024-11-20 18:00:15.120395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.350 [2024-11-20 18:00:15.120409] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1a80, cid 5, qid 0 00:33:15.350 [2024-11-20 18:00:15.120415] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1900, cid 4, qid 0 00:33:15.350 [2024-11-20 18:00:15.120420] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1c00, cid 6, qid 0 00:33:15.350 [2024-11-20 18:00:15.120424] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1d80, cid 7, qid 0 00:33:15.350 [2024-11-20 18:00:15.120634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.350 [2024-11-20 18:00:15.120640] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.350 [2024-11-20 18:00:15.120643] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120647] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e5bad0): datao=0, datal=8192, cccid=5 00:33:15.350 [2024-11-20 18:00:15.120651] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eb1a80) on tqpair(0x1e5bad0): expected_datao=0, payload_size=8192 00:33:15.350 [2024-11-20 18:00:15.120656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120758] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120763] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120768] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.350 [2024-11-20 18:00:15.120774] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.350 [2024-11-20 18:00:15.120777] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120781] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e5bad0): datao=0, datal=512, cccid=4 00:33:15.350 [2024-11-20 18:00:15.120786] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eb1900) on tqpair(0x1e5bad0): expected_datao=0, payload_size=512 00:33:15.350 [2024-11-20 18:00:15.120790] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120810] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120814] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120819] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.350 [2024-11-20 18:00:15.120825] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.350 [2024-11-20 18:00:15.120829] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120832] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x1e5bad0): datao=0, datal=512, cccid=6 00:33:15.350 [2024-11-20 18:00:15.120836] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eb1c00) on tqpair(0x1e5bad0): expected_datao=0, payload_size=512 00:33:15.350 [2024-11-20 18:00:15.120841] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120847] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120851] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120856] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:15.350 [2024-11-20 18:00:15.120862] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:15.350 [2024-11-20 18:00:15.120865] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120869] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e5bad0): datao=0, datal=4096, cccid=7 00:33:15.350 [2024-11-20 18:00:15.120873] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eb1d80) on tqpair(0x1e5bad0): expected_datao=0, payload_size=4096 00:33:15.350 [2024-11-20 18:00:15.120878] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120885] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.120888] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.121023] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.350 [2024-11-20 18:00:15.121031] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.350 [2024-11-20 18:00:15.121034] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.121038] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1a80) on tqpair=0x1e5bad0 00:33:15.350 [2024-11-20 18:00:15.121051] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.350 [2024-11-20 18:00:15.121057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.350 [2024-11-20 18:00:15.121060] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.121064] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1900) on tqpair=0x1e5bad0 00:33:15.350 [2024-11-20 18:00:15.121075] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.350 [2024-11-20 18:00:15.121081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.350 [2024-11-20 18:00:15.121084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.121088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1c00) on tqpair=0x1e5bad0 00:33:15.350 [2024-11-20 18:00:15.121095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.350 [2024-11-20 18:00:15.121101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.350 [2024-11-20 18:00:15.121104] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.350 [2024-11-20 18:00:15.121108] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1d80) on tqpair=0x1e5bad0 00:33:15.350 ===================================================== 00:33:15.350 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:15.350 
===================================================== 00:33:15.350 Controller Capabilities/Features 00:33:15.350 ================================ 00:33:15.350 Vendor ID: 8086 00:33:15.350 Subsystem Vendor ID: 8086 00:33:15.350 Serial Number: SPDK00000000000001 00:33:15.350 Model Number: SPDK bdev Controller 00:33:15.350 Firmware Version: 24.09.1 00:33:15.350 Recommended Arb Burst: 6 00:33:15.350 IEEE OUI Identifier: e4 d2 5c 00:33:15.350 Multi-path I/O 00:33:15.350 May have multiple subsystem ports: Yes 00:33:15.350 May have multiple controllers: Yes 00:33:15.350 Associated with SR-IOV VF: No 00:33:15.350 Max Data Transfer Size: 131072 00:33:15.350 Max Number of Namespaces: 32 00:33:15.350 Max Number of I/O Queues: 127 00:33:15.350 NVMe Specification Version (VS): 1.3 00:33:15.350 NVMe Specification Version (Identify): 1.3 00:33:15.350 Maximum Queue Entries: 128 00:33:15.350 Contiguous Queues Required: Yes 00:33:15.350 Arbitration Mechanisms Supported 00:33:15.350 Weighted Round Robin: Not Supported 00:33:15.350 Vendor Specific: Not Supported 00:33:15.350 Reset Timeout: 15000 ms 00:33:15.350 Doorbell Stride: 4 bytes 00:33:15.350 NVM Subsystem Reset: Not Supported 00:33:15.350 Command Sets Supported 00:33:15.350 NVM Command Set: Supported 00:33:15.350 Boot Partition: Not Supported 00:33:15.350 Memory Page Size Minimum: 4096 bytes 00:33:15.350 Memory Page Size Maximum: 4096 bytes 00:33:15.350 Persistent Memory Region: Not Supported 00:33:15.350 Optional Asynchronous Events Supported 00:33:15.350 Namespace Attribute Notices: Supported 00:33:15.350 Firmware Activation Notices: Not Supported 00:33:15.350 ANA Change Notices: Not Supported 00:33:15.350 PLE Aggregate Log Change Notices: Not Supported 00:33:15.350 LBA Status Info Alert Notices: Not Supported 00:33:15.350 EGE Aggregate Log Change Notices: Not Supported 00:33:15.350 Normal NVM Subsystem Shutdown event: Not Supported 00:33:15.350 Zone Descriptor Change Notices: Not Supported 00:33:15.350 Discovery Log Change Notices: Not Supported 00:33:15.350 Controller Attributes 00:33:15.350 128-bit Host Identifier: Supported 00:33:15.350 Non-Operational Permissive Mode: Not Supported 00:33:15.350 NVM Sets: Not Supported 00:33:15.351 Read Recovery Levels: Not Supported 00:33:15.351 Endurance Groups: Not Supported 00:33:15.351 Predictable Latency Mode: Not Supported 00:33:15.351 Traffic Based Keep ALive: Not Supported 00:33:15.351 Namespace Granularity: Not Supported 00:33:15.351 SQ Associations: Not Supported 00:33:15.351 UUID List: Not Supported 00:33:15.351 Multi-Domain Subsystem: Not Supported 00:33:15.351 Fixed Capacity Management: Not Supported 00:33:15.351 Variable Capacity Management: Not Supported 00:33:15.351 Delete Endurance Group: Not Supported 00:33:15.351 Delete NVM Set: Not Supported 00:33:15.351 Extended LBA Formats Supported: Not Supported 00:33:15.351 Flexible Data Placement Supported: Not Supported 00:33:15.351 00:33:15.351 Controller Memory Buffer Support 00:33:15.351 ================================ 00:33:15.351 Supported: No 00:33:15.351 00:33:15.351 Persistent Memory Region Support 00:33:15.351 ================================ 00:33:15.351 Supported: No 00:33:15.351 00:33:15.351 Admin Command Set Attributes 00:33:15.351 ============================ 00:33:15.351 Security Send/Receive: Not Supported 00:33:15.351 Format NVM: Not Supported 00:33:15.351 Firmware Activate/Download: Not Supported 00:33:15.351 Namespace Management: Not Supported 00:33:15.351 Device Self-Test: Not Supported 00:33:15.351 Directives: Not Supported 
00:33:15.351 NVMe-MI: Not Supported 00:33:15.351 Virtualization Management: Not Supported 00:33:15.351 Doorbell Buffer Config: Not Supported 00:33:15.351 Get LBA Status Capability: Not Supported 00:33:15.351 Command & Feature Lockdown Capability: Not Supported 00:33:15.351 Abort Command Limit: 4 00:33:15.351 Async Event Request Limit: 4 00:33:15.351 Number of Firmware Slots: N/A 00:33:15.351 Firmware Slot 1 Read-Only: N/A 00:33:15.351 Firmware Activation Without Reset: N/A 00:33:15.351 Multiple Update Detection Support: N/A 00:33:15.351 Firmware Update Granularity: No Information Provided 00:33:15.351 Per-Namespace SMART Log: No 00:33:15.351 Asymmetric Namespace Access Log Page: Not Supported 00:33:15.351 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:33:15.351 Command Effects Log Page: Supported 00:33:15.351 Get Log Page Extended Data: Supported 00:33:15.351 Telemetry Log Pages: Not Supported 00:33:15.351 Persistent Event Log Pages: Not Supported 00:33:15.351 Supported Log Pages Log Page: May Support 00:33:15.351 Commands Supported & Effects Log Page: Not Supported 00:33:15.351 Feature Identifiers & Effects Log Page:May Support 00:33:15.351 NVMe-MI Commands & Effects Log Page: May Support 00:33:15.351 Data Area 4 for Telemetry Log: Not Supported 00:33:15.351 Error Log Page Entries Supported: 128 00:33:15.351 Keep Alive: Supported 00:33:15.351 Keep Alive Granularity: 10000 ms 00:33:15.351 00:33:15.351 NVM Command Set Attributes 00:33:15.351 ========================== 00:33:15.351 Submission Queue Entry Size 00:33:15.351 Max: 64 00:33:15.351 Min: 64 00:33:15.351 Completion Queue Entry Size 00:33:15.351 Max: 16 00:33:15.351 Min: 16 00:33:15.351 Number of Namespaces: 32 00:33:15.351 Compare Command: Supported 00:33:15.351 Write Uncorrectable Command: Not Supported 00:33:15.351 Dataset Management Command: Supported 00:33:15.351 Write Zeroes Command: Supported 00:33:15.351 Set Features Save Field: Not Supported 00:33:15.351 Reservations: Supported 00:33:15.351 Timestamp: Not Supported 00:33:15.351 Copy: Supported 00:33:15.351 Volatile Write Cache: Present 00:33:15.351 Atomic Write Unit (Normal): 1 00:33:15.351 Atomic Write Unit (PFail): 1 00:33:15.351 Atomic Compare & Write Unit: 1 00:33:15.351 Fused Compare & Write: Supported 00:33:15.351 Scatter-Gather List 00:33:15.351 SGL Command Set: Supported 00:33:15.351 SGL Keyed: Supported 00:33:15.351 SGL Bit Bucket Descriptor: Not Supported 00:33:15.351 SGL Metadata Pointer: Not Supported 00:33:15.351 Oversized SGL: Not Supported 00:33:15.351 SGL Metadata Address: Not Supported 00:33:15.351 SGL Offset: Supported 00:33:15.351 Transport SGL Data Block: Not Supported 00:33:15.351 Replay Protected Memory Block: Not Supported 00:33:15.351 00:33:15.351 Firmware Slot Information 00:33:15.351 ========================= 00:33:15.351 Active slot: 1 00:33:15.351 Slot 1 Firmware Revision: 24.09.1 00:33:15.351 00:33:15.351 00:33:15.351 Commands Supported and Effects 00:33:15.351 ============================== 00:33:15.351 Admin Commands 00:33:15.351 -------------- 00:33:15.351 Get Log Page (02h): Supported 00:33:15.351 Identify (06h): Supported 00:33:15.351 Abort (08h): Supported 00:33:15.351 Set Features (09h): Supported 00:33:15.351 Get Features (0Ah): Supported 00:33:15.351 Asynchronous Event Request (0Ch): Supported 00:33:15.351 Keep Alive (18h): Supported 00:33:15.351 I/O Commands 00:33:15.351 ------------ 00:33:15.351 Flush (00h): Supported LBA-Change 00:33:15.351 Write (01h): Supported LBA-Change 00:33:15.351 Read (02h): Supported 00:33:15.351 Compare (05h): 
Supported 00:33:15.351 Write Zeroes (08h): Supported LBA-Change 00:33:15.351 Dataset Management (09h): Supported LBA-Change 00:33:15.351 Copy (19h): Supported LBA-Change 00:33:15.351 00:33:15.351 Error Log 00:33:15.351 ========= 00:33:15.351 00:33:15.351 Arbitration 00:33:15.351 =========== 00:33:15.351 Arbitration Burst: 1 00:33:15.351 00:33:15.351 Power Management 00:33:15.351 ================ 00:33:15.351 Number of Power States: 1 00:33:15.351 Current Power State: Power State #0 00:33:15.351 Power State #0: 00:33:15.351 Max Power: 0.00 W 00:33:15.351 Non-Operational State: Operational 00:33:15.351 Entry Latency: Not Reported 00:33:15.351 Exit Latency: Not Reported 00:33:15.351 Relative Read Throughput: 0 00:33:15.351 Relative Read Latency: 0 00:33:15.351 Relative Write Throughput: 0 00:33:15.351 Relative Write Latency: 0 00:33:15.351 Idle Power: Not Reported 00:33:15.351 Active Power: Not Reported 00:33:15.351 Non-Operational Permissive Mode: Not Supported 00:33:15.351 00:33:15.351 Health Information 00:33:15.351 ================== 00:33:15.351 Critical Warnings: 00:33:15.351 Available Spare Space: OK 00:33:15.351 Temperature: OK 00:33:15.351 Device Reliability: OK 00:33:15.351 Read Only: No 00:33:15.351 Volatile Memory Backup: OK 00:33:15.351 Current Temperature: 0 Kelvin (-273 Celsius) 00:33:15.351 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:33:15.351 Available Spare: 0% 00:33:15.351 Available Spare Threshold: 0% 00:33:15.351 [2024-11-20 18:00:15.125228] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.351 [2024-11-20 18:00:15.125235] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e5bad0) 00:33:15.351 [2024-11-20 18:00:15.125242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.351 [2024-11-20 18:00:15.125256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1d80, cid 7, qid 0 00:33:15.351 [2024-11-20 18:00:15.125446] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.351 [2024-11-20 18:00:15.125452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.351 [2024-11-20 18:00:15.125456] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.351 [2024-11-20 18:00:15.125459] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1d80) on tqpair=0x1e5bad0 00:33:15.351 [2024-11-20 18:00:15.125493] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:33:15.351 [2024-11-20 18:00:15.125503] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1300) on tqpair=0x1e5bad0 00:33:15.351 [2024-11-20 18:00:15.125509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.351 [2024-11-20 18:00:15.125515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1480) on tqpair=0x1e5bad0 00:33:15.351 [2024-11-20 18:00:15.125520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.351 [2024-11-20 18:00:15.125525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1600) on tqpair=0x1e5bad0 00:33:15.351 [2024-11-20 18:00:15.125529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:33:15.351 [2024-11-20 18:00:15.125535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.351 [2024-11-20 18:00:15.125539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.351 [2024-11-20 18:00:15.125548] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.351 [2024-11-20 18:00:15.125552] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.351 [2024-11-20 18:00:15.125555] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.351 [2024-11-20 18:00:15.125562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.351 [2024-11-20 18:00:15.125577] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.351 [2024-11-20 18:00:15.125678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.351 [2024-11-20 18:00:15.125684] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.351 [2024-11-20 18:00:15.125688] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.125692] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.125699] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.125703] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.125706] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.125713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.125726] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.125951] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.125957] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 18:00:15.125961] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.125964] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.125969] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:33:15.352 [2024-11-20 18:00:15.125974] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:33:15.352 [2024-11-20 18:00:15.125983] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.125987] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.125990] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.125997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.126008] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.126148] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.126155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 18:00:15.126169] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126173] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.126183] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126191] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.126197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.126208] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.126391] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.126397] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 18:00:15.126401] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126405] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.126415] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126424] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.126432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.126443] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.126606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.126612] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 18:00:15.126616] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126620] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.126629] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126637] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.126643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.126654] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.126828] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.126834] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 
18:00:15.126838] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126841] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.126851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126855] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.126858] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.126865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.126875] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.127050] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.127056] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 18:00:15.127059] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127063] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.127074] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127078] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.127088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.127098] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.127285] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.127292] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 18:00:15.127295] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127299] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.127309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127313] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127316] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.127325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.127336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.127475] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.127482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 18:00:15.127485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127489] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on 
tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.127499] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127503] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127506] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.127513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.352 [2024-11-20 18:00:15.127524] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.352 [2024-11-20 18:00:15.127679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.352 [2024-11-20 18:00:15.127685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.352 [2024-11-20 18:00:15.127688] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127692] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.352 [2024-11-20 18:00:15.127702] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127706] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.352 [2024-11-20 18:00:15.127709] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.352 [2024-11-20 18:00:15.127716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.353 [2024-11-20 18:00:15.127727] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.353 [2024-11-20 18:00:15.127867] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.353 [2024-11-20 18:00:15.127873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.353 [2024-11-20 18:00:15.127877] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.127881] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.353 [2024-11-20 18:00:15.127890] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.127894] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.127898] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.353 [2024-11-20 18:00:15.127904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.353 [2024-11-20 18:00:15.127915] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.353 [2024-11-20 18:00:15.128083] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.353 [2024-11-20 18:00:15.128089] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.353 [2024-11-20 18:00:15.128093] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.353 [2024-11-20 18:00:15.128106] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128110] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128114] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.353 [2024-11-20 18:00:15.128120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.353 [2024-11-20 18:00:15.128133] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.353 [2024-11-20 18:00:15.128357] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.353 [2024-11-20 18:00:15.128364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.353 [2024-11-20 18:00:15.128367] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128371] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.353 [2024-11-20 18:00:15.128381] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.353 [2024-11-20 18:00:15.128395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.353 [2024-11-20 18:00:15.128406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.353 [2024-11-20 18:00:15.128557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.353 [2024-11-20 18:00:15.128563] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.353 [2024-11-20 18:00:15.128566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128570] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.353 [2024-11-20 18:00:15.128581] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128584] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128588] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.353 [2024-11-20 18:00:15.128595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.353 [2024-11-20 18:00:15.128605] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.353 [2024-11-20 18:00:15.128776] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.353 [2024-11-20 18:00:15.128784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.353 [2024-11-20 18:00:15.128788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128791] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.353 [2024-11-20 18:00:15.128801] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128806] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.128809] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.353 
[2024-11-20 18:00:15.128816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.353 [2024-11-20 18:00:15.128826] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.353 [2024-11-20 18:00:15.129018] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.353 [2024-11-20 18:00:15.129024] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.353 [2024-11-20 18:00:15.129028] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.129032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.353 [2024-11-20 18:00:15.129041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.129045] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.129049] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e5bad0) 00:33:15.353 [2024-11-20 18:00:15.129055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.353 [2024-11-20 18:00:15.129068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eb1780, cid 3, qid 0 00:33:15.353 [2024-11-20 18:00:15.133174] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:15.353 [2024-11-20 18:00:15.133183] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:15.353 [2024-11-20 18:00:15.133186] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:15.353 [2024-11-20 18:00:15.133190] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eb1780) on tqpair=0x1e5bad0 00:33:15.353 [2024-11-20 18:00:15.133199] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:33:15.353 Life Percentage Used: 0% 00:33:15.353 Data Units Read: 0 00:33:15.353 Data Units Written: 0 00:33:15.353 Host Read Commands: 0 00:33:15.353 Host Write Commands: 0 00:33:15.353 Controller Busy Time: 0 minutes 00:33:15.353 Power Cycles: 0 00:33:15.353 Power On Hours: 0 hours 00:33:15.353 Unsafe Shutdowns: 0 00:33:15.353 Unrecoverable Media Errors: 0 00:33:15.353 Lifetime Error Log Entries: 0 00:33:15.353 Warning Temperature Time: 0 minutes 00:33:15.353 Critical Temperature Time: 0 minutes 00:33:15.353 00:33:15.353 Number of Queues 00:33:15.353 ================ 00:33:15.353 Number of I/O Submission Queues: 127 00:33:15.353 Number of I/O Completion Queues: 127 00:33:15.353 00:33:15.353 Active Namespaces 00:33:15.353 ================= 00:33:15.353 Namespace ID:1 00:33:15.353 Error Recovery Timeout: Unlimited 00:33:15.353 Command Set Identifier: NVM (00h) 00:33:15.353 Deallocate: Supported 00:33:15.353 Deallocated/Unwritten Error: Not Supported 00:33:15.353 Deallocated Read Value: Unknown 00:33:15.353 Deallocate in Write Zeroes: Not Supported 00:33:15.353 Deallocated Guard Field: 0xFFFF 00:33:15.353 Flush: Supported 00:33:15.353 Reservation: Supported 00:33:15.353 Namespace Sharing Capabilities: Multiple Controllers 00:33:15.353 Size (in LBAs): 131072 (0GiB) 00:33:15.353 Capacity (in LBAs): 131072 (0GiB) 00:33:15.353 Utilization (in LBAs): 131072 (0GiB) 00:33:15.353 NGUID: ABCDEF0123456789ABCDEF0123456789 00:33:15.353 EUI64: ABCDEF0123456789 00:33:15.353 UUID: 7eca3669-ad8f-4886-8cdc-6048dfaff646
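The three "(0GiB)" size readings just above are an artifact of printing whole gibibytes: at the 512-byte data size listed under LBA Format #00 below, 131072 LBAs come to 64 MiB, which integer-truncates to 0 GiB. That is consistent with the 64 MB malloc bdev these host tests export (perf.sh below sets MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512). A quick bash check of the arithmetic; the namespace attribute listing then continues:

lbas=131072
block_size=512                               # LBA Format #00: Data Size: 512
bytes=$((lbas * block_size))                 # 67108864 bytes
echo "$((bytes / 1024 / 1024)) MiB"          # 64 MiB
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # 0 GiB, matching the identify output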
00:33:15.353 Thin Provisioning: Not Supported 00:33:15.353 Per-NS Atomic Units: Yes 00:33:15.353 Atomic Boundary Size (Normal): 0 00:33:15.353 Atomic Boundary Size (PFail): 0 00:33:15.353 Atomic Boundary Offset: 0 00:33:15.353 Maximum Single Source Range Length: 65535 00:33:15.353 Maximum Copy Length: 65535 00:33:15.353 Maximum Source Range Count: 1 00:33:15.353 NGUID/EUI64 Never Reused: No 00:33:15.353 Namespace Write Protected: No 00:33:15.353 Number of LBA Formats: 1 00:33:15.353 Current LBA Format: LBA Format #00 00:33:15.353 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:15.353 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.353 rmmod nvme_tcp 00:33:15.353 rmmod nvme_fabrics 00:33:15.353 rmmod nvme_keyring 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 2834943 ']' 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 2834943 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2834943 ']' 00:33:15.353 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2834943 00:33:15.354 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:33:15.354 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:15.354 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2834943 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2834943' 00:33:15.616 killing process with pid 2834943 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@969 -- # kill 2834943 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2834943 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.616 18:00:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:18.160 00:33:18.160 real 0m11.699s 00:33:18.160 user 0m8.678s 00:33:18.160 sys 0m6.163s 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:18.160 ************************************ 00:33:18.160 END TEST nvmf_identify 00:33:18.160 ************************************ 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.160 ************************************ 00:33:18.160 START TEST nvmf_perf 00:33:18.160 ************************************ 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:33:18.160 * Looking for test storage... 
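Between the END and START banners above, nvmftestfini tears down the identify run: the subsystem is deleted over RPC, the initiator's NVMe kernel modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), iptables rules tagged SPDK_NVMF are stripped, the target network namespace is removed, and the initiator address is flushed. A sketch of the equivalent manual cleanup, assuming rpc.py talks to the default local socket and that _remove_spdk_ns amounts to deleting the namespace (its body is not shown in this log):

# delete the test subsystem, then undo host-side module and network state
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sudo modprobe -r nvme-tcp                  # cascades to nvme_fabrics and nvme_keyring
sudo sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
sudo ip netns delete cvl_0_0_ns_spdk       # assumed equivalent of _remove_spdk_ns
sudo ip -4 addr flush cvl_0_1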
00:33:18.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:18.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.160 --rc genhtml_branch_coverage=1 00:33:18.160 --rc genhtml_function_coverage=1 00:33:18.160 --rc genhtml_legend=1 00:33:18.160 --rc geninfo_all_blocks=1 00:33:18.160 --rc geninfo_unexecuted_blocks=1 00:33:18.160 00:33:18.160 ' 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:18.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.160 --rc genhtml_branch_coverage=1 00:33:18.160 --rc genhtml_function_coverage=1 00:33:18.160 --rc genhtml_legend=1 00:33:18.160 --rc geninfo_all_blocks=1 00:33:18.160 --rc geninfo_unexecuted_blocks=1 00:33:18.160 00:33:18.160 ' 00:33:18.160 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:18.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.160 --rc genhtml_branch_coverage=1 00:33:18.160 --rc genhtml_function_coverage=1 00:33:18.160 --rc genhtml_legend=1 00:33:18.160 --rc geninfo_all_blocks=1 00:33:18.160 --rc geninfo_unexecuted_blocks=1 00:33:18.160 00:33:18.161 ' 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:18.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.161 --rc genhtml_branch_coverage=1 00:33:18.161 --rc genhtml_function_coverage=1 00:33:18.161 --rc genhtml_legend=1 00:33:18.161 --rc geninfo_all_blocks=1 00:33:18.161 --rc geninfo_unexecuted_blocks=1 00:33:18.161 00:33:18.161 ' 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:18.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.161 18:00:17 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.161 18:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 
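The bracketed checks above classify candidate NICs by PCI vendor/device ID: the e810 and x722 arrays hold Intel (0x8086) parts, mlx holds Mellanox (0x15b3) parts, and each device is routed into a bucket by its ID, here 0x159b (an E810 variant), which is why the two ports reported next land in the e810 set. A rough way to reproduce the same lookup by hand, using standard lspci syntax and sysfs paths rather than the harness's pci_bus_cache:

# list ports matching the exact Intel E810 vendor:device pair seen below
lspci -nn -d 8086:159b
# or walk every Ethernet-class PCI device and print its IDs
for dev in /sys/bus/pci/devices/*; do
    [[ $(cat "$dev/class") == 0x0200* ]] || continue   # 0x0200xx = Ethernet controller
    echo "${dev##*/}: $(cat "$dev/vendor"):$(cat "$dev/device")"
done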
00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:26.299 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:26.299 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:26.299 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:26.300 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.300 18:00:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:26.300 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.300 18:00:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:26.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:26.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:33:26.300 00:33:26.300 --- 10.0.0.2 ping statistics --- 00:33:26.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.300 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:26.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:33:26.300 00:33:26.300 --- 10.0.0.1 ping statistics --- 00:33:26.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.300 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=2839275 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 2839275 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2839275 ']' 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
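Summarizing the nvmf_tcp_init block above: one port of the dual-port E810 is moved into a private network namespace to act as the target at 10.0.0.2, its sibling stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP listener port, and a ping in each direction verifies the path before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace; the cvl_0_* names come from this host's ice driver and will differ elsewhere:

# Target/initiator split, as executed above (interface names are machine-specific).
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
modprobe nvme-tcp                                                   # kernel initiator for later tests
# The target app is then launched inside the namespace, exactly as above:
#   ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF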
00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:26.300 18:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:26.300 [2024-11-20 18:00:25.525309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:33:26.300 [2024-11-20 18:00:25.525374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.300 [2024-11-20 18:00:25.618295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:26.300 [2024-11-20 18:00:25.666904] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.300 [2024-11-20 18:00:25.666959] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.300 [2024-11-20 18:00:25.666968] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.300 [2024-11-20 18:00:25.666975] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.300 [2024-11-20 18:00:25.666981] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.300 [2024-11-20 18:00:25.667144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.300 [2024-11-20 18:00:25.667220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.300 [2024-11-20 18:00:25.667303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.300 [2024-11-20 18:00:25.667303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:26.563 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:26.563 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:33:26.563 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:26.563 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:26.563 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:26.563 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.563 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:26.563 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:33:27.135 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:33:27.135 18:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:33:27.396 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:33:27.396 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:27.668 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:33:27.668 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:33:27.668 18:00:27 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:33:27.668 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:33:27.668 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:33:27.668 [2024-11-20 18:00:27.495637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.668 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:27.933 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:27.933 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:28.194 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:28.194 18:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:28.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.455 [2024-11-20 18:00:28.286942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:28.715 18:00:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:33:28.715 18:00:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:33:28.715 18:00:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:33:28.715 18:00:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:33:30.099 Initializing NVMe Controllers 00:33:30.099 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:33:30.099 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:33:30.099 Initialization complete. Launching workers. 
00:33:30.099 ========================================================
00:33:30.099 Latency(us)
00:33:30.099 Device Information : IOPS MiB/s Average min max
00:33:30.099 PCIE (0000:65:00.0) NSID 1 from core 0: 79153.63 309.19 403.65 13.26 7194.05
00:33:30.099 ========================================================
00:33:30.099 Total : 79153.63 309.19 403.65 13.26 7194.05
00:33:30.099
00:33:30.099 18:00:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:33:31.484 Initializing NVMe Controllers
00:33:31.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:31.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:31.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:33:31.484 Initialization complete. Launching workers.
00:33:31.484 ========================================================
00:33:31.484 Latency(us)
00:33:31.484 Device Information : IOPS MiB/s Average min max
00:33:31.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 101.83 0.40 9839.72 78.40 45775.42
00:33:31.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.90 0.24 16283.58 6050.90 47889.09
00:33:31.484 ========================================================
00:33:31.484 Total : 163.73 0.64 12275.81 78.40 47889.09
00:33:31.484
00:33:31.484 18:00:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:33:32.425 Initializing NVMe Controllers
00:33:32.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:32.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:32.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:33:32.425 Initialization complete. Launching workers.
00:33:32.425 ========================================================
00:33:32.425 Latency(us)
00:33:32.425 Device Information : IOPS MiB/s Average min max
00:33:32.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11749.46 45.90 2741.22 462.78 46206.13
00:33:32.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3861.03 15.08 8313.27 5499.44 16404.88
00:33:32.425 ========================================================
00:33:32.425 Total : 15610.49 60.98 4119.38 462.78 46206.13
00:33:32.425
00:33:32.425 18:00:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:33:32.425 18:00:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:33:32.425 18:00:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:33:34.972 Initializing NVMe Controllers
00:33:34.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:34.972 Controller IO queue size 128, less than required.
00:33:34.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:34.972 Controller IO queue size 128, less than required. 00:33:34.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:34.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:34.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:34.972 Initialization complete. Launching workers. 00:33:34.972 ======================================================== 00:33:34.972 Latency(us) 00:33:34.972 Device Information : IOPS MiB/s Average min max 00:33:34.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1788.95 447.24 73832.06 41260.63 121271.17 00:33:34.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 615.98 154.00 216819.95 56688.47 339030.78 00:33:34.972 ======================================================== 00:33:34.972 Total : 2404.93 601.23 110455.98 41260.63 339030.78 00:33:34.972 00:33:34.972 18:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:33:35.233 No valid NVMe controllers or AIO or URING devices found 00:33:35.233 Initializing NVMe Controllers 00:33:35.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:35.233 Controller IO queue size 128, less than required. 00:33:35.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:35.233 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:33:35.233 Controller IO queue size 128, less than required. 00:33:35.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:35.233 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:33:35.233 WARNING: Some requested NVMe devices were skipped 00:33:35.233 18:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:33:37.775 Initializing NVMe Controllers 00:33:37.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:37.775 Controller IO queue size 128, less than required. 00:33:37.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:37.775 Controller IO queue size 128, less than required. 00:33:37.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:37.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:37.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:37.775 Initialization complete. Launching workers. 
00:33:37.775 00:33:37.775 ==================== 00:33:37.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:33:37.775 TCP transport: 00:33:37.775 polls: 39460 00:33:37.775 idle_polls: 24998 00:33:37.775 sock_completions: 14462 00:33:37.775 nvme_completions: 7521 00:33:37.775 submitted_requests: 11224 00:33:37.775 queued_requests: 1 00:33:37.775 00:33:37.775 ==================== 00:33:37.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:33:37.775 TCP transport: 00:33:37.775 polls: 41588 00:33:37.775 idle_polls: 26363 00:33:37.775 sock_completions: 15225 00:33:37.775 nvme_completions: 7641 00:33:37.775 submitted_requests: 11434 00:33:37.775 queued_requests: 1 00:33:37.775 ======================================================== 00:33:37.775 Latency(us) 00:33:37.775 Device Information : IOPS MiB/s Average min max 00:33:37.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1880.00 470.00 68779.15 28901.23 111110.45 00:33:37.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1910.00 477.50 67723.62 31409.39 134392.16 00:33:37.775 ======================================================== 00:33:37.775 Total : 3789.99 947.50 68247.21 28901.23 134392.16 00:33:37.775 00:33:37.775 18:00:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:33:37.775 18:00:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:38.036 18:00:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:33:38.036 18:00:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:33:38.036 18:00:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:33:38.976 18:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=ff1b8eac-0bac-428d-96be-bba01c4f7904 00:33:38.976 18:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb ff1b8eac-0bac-428d-96be-bba01c4f7904 00:33:38.976 18:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=ff1b8eac-0bac-428d-96be-bba01c4f7904 00:33:38.976 18:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:38.976 18:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:33:38.976 18:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:33:38.976 18:00:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:39.236 { 00:33:39.236 "uuid": "ff1b8eac-0bac-428d-96be-bba01c4f7904", 00:33:39.236 "name": "lvs_0", 00:33:39.236 "base_bdev": "Nvme0n1", 00:33:39.236 "total_data_clusters": 457407, 00:33:39.236 "free_clusters": 457407, 00:33:39.236 "block_size": 512, 00:33:39.236 "cluster_size": 4194304 00:33:39.236 } 00:33:39.236 ]' 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ff1b8eac-0bac-428d-96be-bba01c4f7904") .free_clusters' 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:33:39.236 18:00:39 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ff1b8eac-0bac-428d-96be-bba01c4f7904") .cluster_size' 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 00:33:39.236 1829628 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:33:39.236 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ff1b8eac-0bac-428d-96be-bba01c4f7904 lbd_0 20480 00:33:39.496 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=609c1c7f-7c75-4d40-87c1-4455bfbea7ff 00:33:39.496 18:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 609c1c7f-7c75-4d40-87c1-4455bfbea7ff lvs_n_0 00:33:41.410 18:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8f22877e-f6a4-4ce6-945a-39ca291e9a55 00:33:41.410 18:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8f22877e-f6a4-4ce6-945a-39ca291e9a55 00:33:41.410 18:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=8f22877e-f6a4-4ce6-945a-39ca291e9a55 00:33:41.410 18:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:41.410 18:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:33:41.410 18:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:33:41.410 18:00:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:41.410 { 00:33:41.410 "uuid": "ff1b8eac-0bac-428d-96be-bba01c4f7904", 00:33:41.410 "name": "lvs_0", 00:33:41.410 "base_bdev": "Nvme0n1", 00:33:41.410 "total_data_clusters": 457407, 00:33:41.410 "free_clusters": 452287, 00:33:41.410 "block_size": 512, 00:33:41.410 "cluster_size": 4194304 00:33:41.410 }, 00:33:41.410 { 00:33:41.410 "uuid": "8f22877e-f6a4-4ce6-945a-39ca291e9a55", 00:33:41.410 "name": "lvs_n_0", 00:33:41.410 "base_bdev": "609c1c7f-7c75-4d40-87c1-4455bfbea7ff", 00:33:41.410 "total_data_clusters": 5114, 00:33:41.410 "free_clusters": 5114, 00:33:41.410 "block_size": 512, 00:33:41.410 "cluster_size": 4194304 00:33:41.410 } 00:33:41.410 ]' 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8f22877e-f6a4-4ce6-945a-39ca291e9a55") .free_clusters' 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8f22877e-f6a4-4ce6-945a-39ca291e9a55") .cluster_size' 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:33:41.410 20456 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:33:41.410 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f22877e-f6a4-4ce6-945a-39ca291e9a55 lbd_nest_0 20456 00:33:41.671 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=7d6e977c-1f90-4d62-8ffb-3f5dffd7345a 00:33:41.671 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:41.671 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:33:41.671 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 7d6e977c-1f90-4d62-8ffb-3f5dffd7345a 00:33:41.933 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.194 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:33:42.194 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:33:42.194 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:42.194 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:42.194 18:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:54.544 Initializing NVMe Controllers 00:33:54.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:54.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:54.544 Initialization complete. Launching workers. 00:33:54.544 ======================================================== 00:33:54.544 Latency(us) 00:33:54.545 Device Information : IOPS MiB/s Average min max 00:33:54.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.40 0.02 20322.48 216.03 46891.00 00:33:54.545 ======================================================== 00:33:54.545 Total : 49.40 0.02 20322.48 216.03 46891.00 00:33:54.545 00:33:54.545 18:00:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:54.545 18:00:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:02.684 Initializing NVMe Controllers 00:34:02.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:02.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:02.684 Initialization complete. Launching workers. 
00:34:02.684 ========================================================
00:34:02.684 Latency(us)
00:34:02.684 Device Information : IOPS MiB/s Average min max
00:34:02.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 59.08 7.38 16939.59 5986.35 51879.39
00:34:02.684 ========================================================
00:34:02.684 Total : 59.08 7.38 16939.59 5986.35 51879.39
00:34:02.684
00:34:02.684 18:01:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:34:02.684 18:01:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:34:02.684 18:01:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:14.926 Initializing NVMe Controllers
00:34:14.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:14.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:14.926 Initialization complete. Launching workers.
00:34:14.926 ========================================================
00:34:14.926 Latency(us)
00:34:14.926 Device Information : IOPS MiB/s Average min max
00:34:14.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8819.76 4.31 3628.53 468.32 10086.97
00:34:14.926 ========================================================
00:34:14.926 Total : 8819.76 4.31 3628.53 468.32 10086.97
00:34:14.926
00:34:14.926 18:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:34:14.926 18:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:24.928 Initializing NVMe Controllers
00:34:24.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:24.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:24.928 Initialization complete. Launching workers.
00:34:24.928 ========================================================
00:34:24.928 Latency(us)
00:34:24.928 Device Information : IOPS MiB/s Average min max
00:34:24.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5121.90 640.24 6247.19 457.65 17241.34
00:34:24.928 ========================================================
00:34:24.928 Total : 5121.90 640.24 6247.19 457.65 17241.34
00:34:24.928
00:34:24.928 18:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:34:24.928 18:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:34:24.928 18:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:34.932 Initializing NVMe Controllers
00:34:34.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:34.932 Controller IO queue size 128, less than required.
00:34:34.932 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:34.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:34.932 Initialization complete. Launching workers. 00:34:34.932 ======================================================== 00:34:34.932 Latency(us) 00:34:34.932 Device Information : IOPS MiB/s Average min max 00:34:34.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15893.90 7.76 8058.63 1419.06 16396.08 00:34:34.932 ======================================================== 00:34:34.932 Total : 15893.90 7.76 8058.63 1419.06 16396.08 00:34:34.932 00:34:34.932 18:01:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:34.932 18:01:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:44.929 Initializing NVMe Controllers 00:34:44.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:44.929 Controller IO queue size 128, less than required. 00:34:44.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:44.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:44.929 Initialization complete. Launching workers. 00:34:44.929 ======================================================== 00:34:44.929 Latency(us) 00:34:44.929 Device Information : IOPS MiB/s Average min max 00:34:44.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.31 150.79 106780.05 16114.27 222645.95 00:34:44.929 ======================================================== 00:34:44.929 Total : 1206.31 150.79 106780.05 16114.27 222645.95 00:34:44.929 00:34:44.929 18:01:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:44.929 18:01:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d6e977c-1f90-4d62-8ffb-3f5dffd7345a 00:34:45.872 18:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:46.132 18:01:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 609c1c7f-7c75-4d40-87c1-4455bfbea7ff 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:46.393 rmmod nvme_tcp 
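For reference, the bare figures echoed during the lvol setup above (1829628 MB, then 20456 MB) are get_lvs_free_mb results: an lvstore's free_clusters multiplied by its cluster_size, converted to MB, and used to size lbd_0 (capped at 20480 MB) and lbd_nest_0. A small sketch of the same arithmetic, assuming rpc.py and jq are on PATH; this is a reconstruction of the traced steps, not the autotest helper verbatim:

# free MB = free_clusters * cluster_size / 1 MiB
lvs_free_mb() {  # $1 = lvstore UUID
  local fc cs
  fc=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$1\") .free_clusters")
  cs=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$1\") .cluster_size")
  echo $(( fc * cs / 1024 / 1024 ))
}
# lvs_0:   457407 free clusters * 4194304 B (4 MiB) = 1829628 MB, capped to 20480 MB for lbd_0
# lvs_n_0:   5114 free clusters * 4 MiB             =   20456 MB, used as-is for lbd_nest_0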
00:34:46.393 rmmod nvme_fabrics 00:34:46.393 rmmod nvme_keyring 00:34:46.393 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 2839275 ']' 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 2839275 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2839275 ']' 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2839275 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2839275 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2839275' 00:34:46.654 killing process with pid 2839275 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2839275 00:34:46.654 18:01:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2839275 00:34:48.566 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:48.566 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:48.566 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:48.566 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:34:48.566 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:34:48.566 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:48.566 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:34:48.567 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:48.567 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:48.567 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.567 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.567 18:01:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:51.110 00:34:51.110 real 1m32.778s 00:34:51.110 user 5m27.065s 00:34:51.110 sys 0m15.819s 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:51.110 ************************************ 00:34:51.110 END TEST nvmf_perf 00:34:51.110 ************************************ 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.110 ************************************ 00:34:51.110 START TEST nvmf_fio_host 00:34:51.110 ************************************ 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:51.110 * Looking for test storage... 00:34:51.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:51.110 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:51.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.111 --rc genhtml_branch_coverage=1 00:34:51.111 --rc genhtml_function_coverage=1 00:34:51.111 --rc genhtml_legend=1 00:34:51.111 --rc geninfo_all_blocks=1 00:34:51.111 --rc geninfo_unexecuted_blocks=1 00:34:51.111 00:34:51.111 ' 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:51.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.111 --rc genhtml_branch_coverage=1 00:34:51.111 --rc genhtml_function_coverage=1 00:34:51.111 --rc genhtml_legend=1 00:34:51.111 --rc geninfo_all_blocks=1 00:34:51.111 --rc geninfo_unexecuted_blocks=1 00:34:51.111 00:34:51.111 ' 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:51.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.111 --rc genhtml_branch_coverage=1 00:34:51.111 --rc genhtml_function_coverage=1 00:34:51.111 --rc genhtml_legend=1 00:34:51.111 --rc geninfo_all_blocks=1 00:34:51.111 --rc geninfo_unexecuted_blocks=1 00:34:51.111 00:34:51.111 ' 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:51.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.111 --rc genhtml_branch_coverage=1 00:34:51.111 --rc genhtml_function_coverage=1 00:34:51.111 --rc genhtml_legend=1 00:34:51.111 --rc geninfo_all_blocks=1 00:34:51.111 --rc geninfo_unexecuted_blocks=1 00:34:51.111 00:34:51.111 ' 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.111 18:01:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.111 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:51.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:51.112 
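[Annotation] The rpc_py helper assigned just above is the handle for every piece of target configuration that follows. Condensed from the calls traced later in this run (arguments verbatim from the trace), the setup sequence is:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Create the TCP transport, a malloc bdev, and expose it as a namespace
    # on a new subsystem with a TCP listener -- exactly as host/fio.sh does below.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
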
18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:51.112 18:01:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.252 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:59.253 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:59.253 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
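[Annotation] For each matched PCI function, this loop resolves the kernel net interface through sysfs, which is what produces the "Found net devices under ..." lines in this pass. A compact sketch of that mapping, using the two e810 BDFs found in this run:

    # Resolve each matched PCI function to its netdev via the standard sysfs layout.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done
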
00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:59.253 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:59.253 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.253 18:01:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:59.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:34:59.253 00:34:59.253 --- 10.0.0.2 ping statistics --- 00:34:59.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.253 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:59.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:34:59.253 00:34:59.253 --- 10.0.0.1 ping statistics --- 00:34:59.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.253 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.253 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2858995 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2858995 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2858995 ']' 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:59.254 18:01:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.254 [2024-11-20 18:01:58.316580] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
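[Annotation] The launch just traced amounts to starting nvmf_tgt inside the test namespace and blocking until its RPC socket answers before any configuration is sent. A minimal sketch of that pattern, assuming the paths and namespace name from this run (the polling loop is illustrative, not the actual waitforlisten implementation):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the target in the namespace with the same shm id, trace mask, and core mask.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the target is ready to accept commands.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
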
00:34:59.254 [2024-11-20 18:01:58.316647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.254 [2024-11-20 18:01:58.404669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:59.254 [2024-11-20 18:01:58.453510] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.254 [2024-11-20 18:01:58.453560] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.254 [2024-11-20 18:01:58.453568] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.254 [2024-11-20 18:01:58.453576] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.254 [2024-11-20 18:01:58.453583] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.254 [2024-11-20 18:01:58.453654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.254 [2024-11-20 18:01:58.453810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:59.254 [2024-11-20 18:01:58.453941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.254 [2024-11-20 18:01:58.453943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:59.254 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:59.254 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:34:59.254 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:59.514 [2024-11-20 18:01:59.321228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.514 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:34:59.514 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.514 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.514 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:59.776 Malloc1 00:34:59.776 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:00.038 18:01:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:00.300 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.300 [2024-11-20 18:02:00.194226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:00.562 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:00.844 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:00.844 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:00.844 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:00.844 18:02:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:01.106 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:01.106 fio-3.35 00:35:01.106 Starting 1 thread 00:35:03.646 00:35:03.646 test: (groupid=0, jobs=1): 
err= 0: pid=2859627: Wed Nov 20 18:02:03 2024 00:35:03.646 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2005msec) 00:35:03.646 slat (usec): min=2, max=288, avg= 2.16, stdev= 2.49 00:35:03.646 clat (usec): min=3223, max=9255, avg=5113.90, stdev=383.70 00:35:03.647 lat (usec): min=3225, max=9262, avg=5116.06, stdev=383.94 00:35:03.647 clat percentiles (usec): 00:35:03.647 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:35:03.647 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:35:03.647 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:35:03.647 | 99.00th=[ 5997], 99.50th=[ 6456], 99.90th=[ 8356], 99.95th=[ 8717], 00:35:03.647 | 99.99th=[ 9110] 00:35:03.647 bw ( KiB/s): min=53864, max=55640, per=100.00%, avg=55148.00, stdev=859.21, samples=4 00:35:03.647 iops : min=13466, max=13910, avg=13787.00, stdev=214.80, samples=4 00:35:03.647 write: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2005msec); 0 zone resets 00:35:03.647 slat (usec): min=2, max=273, avg= 2.23, stdev= 1.81 00:35:03.647 clat (usec): min=2514, max=8270, avg=4130.91, stdev=332.48 00:35:03.647 lat (usec): min=2516, max=8272, avg=4133.14, stdev=332.76 00:35:03.647 clat percentiles (usec): 00:35:03.647 | 1.00th=[ 3425], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916], 00:35:03.647 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:35:03.647 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:35:03.647 | 99.00th=[ 4883], 99.50th=[ 5866], 99.90th=[ 7046], 99.95th=[ 7504], 00:35:03.647 | 99.99th=[ 8160] 00:35:03.647 bw ( KiB/s): min=54216, max=55584, per=100.00%, avg=55106.00, stdev=608.29, samples=4 00:35:03.647 iops : min=13554, max=13896, avg=13776.50, stdev=152.07, samples=4 00:35:03.647 lat (msec) : 4=16.10%, 10=83.90% 00:35:03.647 cpu : usr=72.90%, sys=25.95%, ctx=19, majf=0, minf=18 00:35:03.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:35:03.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:03.647 issued rwts: total=27643,27611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:03.647 00:35:03.647 Run status group 0 (all jobs): 00:35:03.647 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2005-2005msec 00:35:03.647 WRITE: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2005-2005msec 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:03.647 
18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:03.647 18:02:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:03.920 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:35:03.920 fio-3.35 00:35:03.920 Starting 1 thread 00:35:06.462 00:35:06.462 test: (groupid=0, jobs=1): err= 0: pid=2860437: Wed Nov 20 18:02:06 2024 00:35:06.462 read: IOPS=9570, BW=150MiB/s (157MB/s)(300MiB/2004msec) 00:35:06.462 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.60 00:35:06.462 clat (usec): min=1876, max=15136, avg=8240.33, stdev=1930.42 00:35:06.462 lat (usec): min=1879, max=15139, avg=8243.93, stdev=1930.58 00:35:06.462 clat percentiles (usec): 00:35:06.462 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6456], 00:35:06.462 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8717], 00:35:06.462 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[10814], 95.00th=[10945], 00:35:06.462 | 99.00th=[12780], 99.50th=[13173], 99.90th=[14091], 99.95th=[14353], 00:35:06.462 | 99.99th=[15139] 00:35:06.462 bw ( KiB/s): min=64800, max=91808, per=49.31%, avg=75512.00, stdev=11575.55, samples=4 00:35:06.462 iops : min= 4050, max= 5738, avg=4719.50, stdev=723.47, samples=4 00:35:06.462 write: IOPS=5896, BW=92.1MiB/s (96.6MB/s)(155MiB/1679msec); 0 zone resets 00:35:06.462 slat (usec): min=39, max=398, 
avg=40.96, stdev= 7.91 00:35:06.462 clat (usec): min=2098, max=15823, avg=8957.83, stdev=1352.14 00:35:06.462 lat (usec): min=2138, max=15863, avg=8998.79, stdev=1354.36 00:35:06.462 clat percentiles (usec): 00:35:06.462 | 1.00th=[ 5997], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7898], 00:35:06.462 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:35:06.462 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:35:06.462 | 99.00th=[12518], 99.50th=[13960], 99.90th=[15139], 99.95th=[15270], 00:35:06.462 | 99.99th=[15795] 00:35:06.462 bw ( KiB/s): min=67456, max=94848, per=83.42%, avg=78704.00, stdev=11695.39, samples=4 00:35:06.462 iops : min= 4216, max= 5928, avg=4919.00, stdev=730.96, samples=4 00:35:06.462 lat (msec) : 2=0.01%, 4=0.52%, 10=76.92%, 20=22.55% 00:35:06.462 cpu : usr=85.52%, sys=13.08%, ctx=17, majf=0, minf=48 00:35:06.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:35:06.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.462 issued rwts: total=19180,9901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.462 00:35:06.462 Run status group 0 (all jobs): 00:35:06.462 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=300MiB (314MB), run=2004-2004msec 00:35:06.462 WRITE: bw=92.1MiB/s (96.6MB/s), 92.1MiB/s-92.1MiB/s (96.6MB/s-96.6MB/s), io=155MiB (162MB), run=1679-1679msec 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:06.462 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:06.723 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:06.723 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:06.723 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:35:06.985 Nvme0n1 00:35:06.985 18:02:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f703f6a2-528a-45c8-aeec-d33fa96ece0d 00:35:07.927 18:02:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f703f6a2-528a-45c8-aeec-d33fa96ece0d 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=f703f6a2-528a-45c8-aeec-d33fa96ece0d 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:35:07.927 { 00:35:07.927 "uuid": "f703f6a2-528a-45c8-aeec-d33fa96ece0d", 00:35:07.927 "name": "lvs_0", 00:35:07.927 "base_bdev": "Nvme0n1", 00:35:07.927 "total_data_clusters": 1787, 00:35:07.927 "free_clusters": 1787, 00:35:07.927 "block_size": 512, 00:35:07.927 "cluster_size": 1073741824 00:35:07.927 } 00:35:07.927 ]' 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f703f6a2-528a-45c8-aeec-d33fa96ece0d") .free_clusters' 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f703f6a2-528a-45c8-aeec-d33fa96ece0d") .cluster_size' 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:35:07.927 1829888 00:35:07.927 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:35:08.187 9f3474b3-6bf6-4081-a601-8a23940c8b93 00:35:08.187 18:02:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:35:08.448 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:35:08.448 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:08.710 18:02:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:08.972 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:08.972 fio-3.35 00:35:08.972 Starting 1 thread 00:35:11.518 00:35:11.518 test: (groupid=0, jobs=1): err= 0: pid=2861600: Wed Nov 20 18:02:11 2024 00:35:11.518 read: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.2MiB/2005msec) 00:35:11.518 slat (usec): min=2, max=116, avg= 2.21, stdev= 1.11 00:35:11.518 clat (usec): min=2434, max=11346, avg=6824.19, stdev=503.61 00:35:11.518 lat (usec): min=2452, max=11349, avg=6826.40, stdev=503.55 00:35:11.518 clat percentiles (usec): 00:35:11.518 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6456], 00:35:11.518 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:35:11.518 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:35:11.518 | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[ 8717], 99.95th=[10421], 00:35:11.518 | 99.99th=[11338] 00:35:11.518 bw ( KiB/s): 
min=40624, max=41952, per=99.88%, avg=41444.00, stdev=580.78, samples=4 00:35:11.518 iops : min=10156, max=10488, avg=10361.00, stdev=145.19, samples=4 00:35:11.518 write: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.3MiB/2005msec); 0 zone resets 00:35:11.518 slat (nsec): min=2082, max=101197, avg=2280.48, stdev=745.76 00:35:11.518 clat (usec): min=1223, max=9504, avg=5455.77, stdev=435.33 00:35:11.518 lat (usec): min=1231, max=9506, avg=5458.05, stdev=435.31 00:35:11.518 clat percentiles (usec): 00:35:11.518 | 1.00th=[ 4490], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5145], 00:35:11.518 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:35:11.518 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6128], 00:35:11.518 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7898], 99.95th=[ 8586], 00:35:11.518 | 99.99th=[ 9503] 00:35:11.518 bw ( KiB/s): min=41160, max=41896, per=100.00%, avg=41522.00, stdev=313.22, samples=4 00:35:11.518 iops : min=10290, max=10474, avg=10380.50, stdev=78.30, samples=4 00:35:11.518 lat (msec) : 2=0.02%, 4=0.10%, 10=99.85%, 20=0.03% 00:35:11.518 cpu : usr=72.16%, sys=26.85%, ctx=75, majf=0, minf=20 00:35:11.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:35:11.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:11.518 issued rwts: total=20799,20811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:11.518 00:35:11.518 Run status group 0 (all jobs): 00:35:11.518 READ: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.2MiB (85.2MB), run=2005-2005msec 00:35:11.518 WRITE: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.2MB), run=2005-2005msec 00:35:11.518 18:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:11.779 18:02:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d0a67f0b-453c-4e2f-8e2d-4cd148408a7f 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d0a67f0b-453c-4e2f-8e2d-4cd148408a7f 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=d0a67f0b-453c-4e2f-8e2d-4cd148408a7f 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:35:12.720 { 00:35:12.720 "uuid": "f703f6a2-528a-45c8-aeec-d33fa96ece0d", 00:35:12.720 "name": "lvs_0", 00:35:12.720 "base_bdev": "Nvme0n1", 00:35:12.720 "total_data_clusters": 1787, 00:35:12.720 "free_clusters": 0, 00:35:12.720 "block_size": 512, 00:35:12.720 
"cluster_size": 1073741824 00:35:12.720 }, 00:35:12.720 { 00:35:12.720 "uuid": "d0a67f0b-453c-4e2f-8e2d-4cd148408a7f", 00:35:12.720 "name": "lvs_n_0", 00:35:12.720 "base_bdev": "9f3474b3-6bf6-4081-a601-8a23940c8b93", 00:35:12.720 "total_data_clusters": 457025, 00:35:12.720 "free_clusters": 457025, 00:35:12.720 "block_size": 512, 00:35:12.720 "cluster_size": 4194304 00:35:12.720 } 00:35:12.720 ]' 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d0a67f0b-453c-4e2f-8e2d-4cd148408a7f") .free_clusters' 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:35:12.720 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d0a67f0b-453c-4e2f-8e2d-4cd148408a7f") .cluster_size' 00:35:12.981 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:35:12.981 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:35:12.981 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:35:12.981 1828100 00:35:12.981 18:02:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:35:13.552 37e1c3b9-8547-4647-8d73-fc6770aa6c3b 00:35:13.552 18:02:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:35:13.813 18:02:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:35:14.074 18:02:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in 
"${sanitizers[@]}" 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:14.336 18:02:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:14.660 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:14.660 fio-3.35 00:35:14.660 Starting 1 thread 00:35:17.235 00:35:17.235 test: (groupid=0, jobs=1): err= 0: pid=2862790: Wed Nov 20 18:02:16 2024 00:35:17.235 read: IOPS=9259, BW=36.2MiB/s (37.9MB/s)(72.5MiB/2005msec) 00:35:17.235 slat (usec): min=2, max=111, avg= 2.21, stdev= 1.15 00:35:17.235 clat (usec): min=2031, max=12521, avg=7651.08, stdev=585.35 00:35:17.235 lat (usec): min=2048, max=12523, avg=7653.29, stdev=585.29 00:35:17.235 clat percentiles (usec): 00:35:17.235 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:35:17.235 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:35:17.235 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:35:17.235 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11731], 00:35:17.235 | 99.99th=[12518] 00:35:17.235 bw ( KiB/s): min=36072, max=37624, per=99.87%, avg=36992.00, stdev=655.58, samples=4 00:35:17.235 iops : min= 9018, max= 9406, avg=9248.00, stdev=163.89, samples=4 00:35:17.235 write: IOPS=9265, BW=36.2MiB/s (38.0MB/s)(72.6MiB/2005msec); 0 zone resets 00:35:17.235 slat (nsec): min=2092, max=114009, avg=2277.09, stdev=880.99 00:35:17.235 clat (usec): min=1061, max=11595, avg=6105.31, stdev=506.08 00:35:17.235 lat (usec): min=1068, max=11597, avg=6107.58, stdev=506.07 00:35:17.235 clat percentiles (usec): 00:35:17.235 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:35:17.235 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:35:17.235 | 70.00th=[ 6325], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:35:17.235 | 99.00th=[ 7177], 
99.50th=[ 7373], 99.90th=[ 9241], 99.95th=[10028], 00:35:17.235 | 99.99th=[11469] 00:35:17.235 bw ( KiB/s): min=36800, max=37248, per=99.94%, avg=37042.00, stdev=190.48, samples=4 00:35:17.235 iops : min= 9200, max= 9312, avg=9260.50, stdev=47.62, samples=4 00:35:17.235 lat (msec) : 2=0.01%, 4=0.11%, 10=99.78%, 20=0.11% 00:35:17.235 cpu : usr=70.96%, sys=28.19%, ctx=50, majf=0, minf=20 00:35:17.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:35:17.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:17.235 issued rwts: total=18566,18578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.235 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:17.235 00:35:17.235 Run status group 0 (all jobs): 00:35:17.235 READ: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=72.5MiB (76.0MB), run=2005-2005msec 00:35:17.235 WRITE: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=72.6MiB (76.1MB), run=2005-2005msec 00:35:17.235 18:02:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:35:17.235 18:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:35:17.235 18:02:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:35:19.148 18:02:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:35:19.148 18:02:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:35:19.719 18:02:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:35:19.980 18:02:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:35:21.893 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:21.893 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:22.153 rmmod nvme_tcp 00:35:22.153 rmmod nvme_fabrics 00:35:22.153 rmmod nvme_keyring 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:35:22.153 
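[Annotation] nvmftestfini, traced from here on, unloads the NVMe kernel modules (above), kills the target by PID, then reverses the nvmf_tcp_init plumbing. A hedged sketch of that cleanup, with the PID variable and namespace name taken from this run (ip netns delete is an assumption; the script reaches the same end through _remove_spdk_ns):

    kill -9 "$nvmfpid" 2>/dev/null || true
    # Strip the SPDK_NVMF-tagged ACCEPT rule added for port 4420 during init.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
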
18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 2858995 ']' 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 2858995 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2858995 ']' 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2858995 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2858995 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2858995' 00:35:22.153 killing process with pid 2858995 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2858995 00:35:22.153 18:02:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2858995 00:35:22.153 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:22.153 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:22.153 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:22.153 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:35:22.153 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:35:22.153 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:22.153 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:35:22.414 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:22.414 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:22.414 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.414 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:22.414 18:02:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.326 18:02:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:24.326 00:35:24.327 real 0m33.676s 00:35:24.327 user 2m44.137s 00:35:24.327 sys 0m10.319s 00:35:24.327 18:02:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:24.327 18:02:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.327 ************************************ 00:35:24.327 END TEST nvmf_fio_host 00:35:24.327 ************************************ 00:35:24.327 18:02:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:35:24.327 18:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:24.327 18:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:35:24.327 18:02:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.327 ************************************ 00:35:24.327 START TEST nvmf_failover 00:35:24.327 ************************************ 00:35:24.327 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:35:24.589 * Looking for test storage... 00:35:24.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:24.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.589 --rc genhtml_branch_coverage=1 00:35:24.589 --rc genhtml_function_coverage=1 00:35:24.589 --rc genhtml_legend=1 00:35:24.589 --rc geninfo_all_blocks=1 00:35:24.589 --rc geninfo_unexecuted_blocks=1 00:35:24.589 00:35:24.589 ' 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:24.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.589 --rc genhtml_branch_coverage=1 00:35:24.589 --rc genhtml_function_coverage=1 00:35:24.589 --rc genhtml_legend=1 00:35:24.589 --rc geninfo_all_blocks=1 00:35:24.589 --rc geninfo_unexecuted_blocks=1 00:35:24.589 00:35:24.589 ' 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:24.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.589 --rc genhtml_branch_coverage=1 00:35:24.589 --rc genhtml_function_coverage=1 00:35:24.589 --rc genhtml_legend=1 00:35:24.589 --rc geninfo_all_blocks=1 00:35:24.589 --rc geninfo_unexecuted_blocks=1 00:35:24.589 00:35:24.589 ' 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:24.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.589 --rc genhtml_branch_coverage=1 00:35:24.589 --rc genhtml_function_coverage=1 00:35:24.589 --rc genhtml_legend=1 00:35:24.589 --rc geninfo_all_blocks=1 00:35:24.589 --rc geninfo_unexecuted_blocks=1 00:35:24.589 00:35:24.589 ' 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.589 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:24.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
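The lt/cmp_versions trace at the start of this test (the scripts/common.sh lines above) is the stock field-wise version comparison: both strings are split on '.', '-' and ':' and compared component by component, and because lcov 1.15 < 2 the old-style --rc lcov_branch_coverage options are selected. A sketch matching the traced control flow, simplified to the '<' path exercised here:

# Field-wise version compare as traced above: lt returns 0 when $1 < $2.
lt() { cmp_versions "$1" "<" "$2"; }
cmp_versions() {
    local IFS=.-:          # split version strings on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # left side is newer
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # left side is older
    done
    return 1               # equal versions: "<" is false
}
lt 1.15 2 && echo "old lcov detected"   # true here, as the trace shows

Separately, the "[: : integer expression expected" message above is benign: a flag that is unset on this rig expands to an empty string inside the numeric test '[' '' -eq 1 ']', the test errors out, and the guarded branch is simply skipped.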
00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:35:24.590 18:02:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:32.738 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:32.738 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:32.738 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:32.738 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:32.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:35:32.738 00:35:32.738 --- 10.0.0.2 ping statistics --- 00:35:32.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.738 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:32.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:35:32.738 00:35:32.738 --- 10.0.0.1 ping statistics --- 00:35:32.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.738 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.738 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=2868183 00:35:32.739 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 2868183 00:35:32.739 18:02:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:32.739 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2868183 ']' 00:35:32.739 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.739 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:32.739 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.739 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:32.739 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:32.739 [2024-11-20 18:02:32.060126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:35:32.739 [2024-11-20 18:02:32.060203] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.739 [2024-11-20 18:02:32.150227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:32.739 [2024-11-20 18:02:32.198289] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
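All of the nvmf_tcp_init activity above boils down to a few lines of iproute2 plumbing: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target NIC at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the link in both directions before the target starts. Condensed from the trace (interface names are specific to this rig, and the iptables comment tagging is omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

Running the target inside the namespace is what lets a single machine exercise a real TCP path between two physical ports instead of loopback.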
00:35:32.739 [2024-11-20 18:02:32.198347] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.739 [2024-11-20 18:02:32.198359] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.739 [2024-11-20 18:02:32.198369] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.739 [2024-11-20 18:02:32.198377] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.739 [2024-11-20 18:02:32.198543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:35:32.739 [2024-11-20 18:02:32.198700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.739 [2024-11-20 18:02:32.198701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:33.001 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:33.001 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:35:33.001 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:33.001 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:33.001 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:33.262 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.262 18:02:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:33.262 [2024-11-20 18:02:33.094503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.262 18:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:33.523 Malloc0 00:35:33.523 18:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:33.785 18:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:34.046 18:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.046 [2024-11-20 18:02:33.913501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.046 18:02:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:34.307 [2024-11-20 18:02:34.114171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:34.307 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:34.569 [2024-11-20 18:02:34.314969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2868760 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2868760 /var/tmp/bdevperf.sock 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2868760 ']' 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:34.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:34.569 18:02:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:35.510 18:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:35.510 18:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:35:35.510 18:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:35.770 NVMe0n1 00:35:35.770 18:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:36.030 00:35:36.030 18:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2868925 00:35:36.030 18:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:36.030 18:02:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:35:36.971 18:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.231 [2024-11-20 18:02:36.907068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.231 [2024-11-20 18:02:36.907106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.231 [2024-11-20 18:02:36.907112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.231 [2024-11-20 18:02:36.907117] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.231 [... same recv-state message for tqpair=0x2211850 repeated through 18:02:36.907221 ...] [2024-11-20 18:02:36.907221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the
state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 [2024-11-20 18:02:36.907309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2211850 is same with the state(6) to be set 00:35:37.232 18:02:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:35:40.532 18:02:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:40.532 00:35:40.532 18:02:40 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:40.532 18:02:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:35:43.837 18:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.837 [2024-11-20 18:02:43.592674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.837 18:02:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:35:44.780 18:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:45.041 18:02:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2868925 00:35:51.630 { 00:35:51.630 "results": [ 00:35:51.630 { 00:35:51.630 "job": "NVMe0n1", 00:35:51.630 "core_mask": "0x1", 00:35:51.630 "workload": "verify", 00:35:51.630 "status": "finished", 00:35:51.630 "verify_range": { 00:35:51.630 "start": 0, 00:35:51.630 "length": 16384 00:35:51.630 }, 00:35:51.630 "queue_depth": 128, 00:35:51.630 "io_size": 4096, 00:35:51.630 "runtime": 15.003948, 00:35:51.630 "iops": 12356.214511007369, 00:35:51.630 "mibps": 48.266462933622535, 00:35:51.630 "io_failed": 8213, 00:35:51.630 "io_timeout": 0, 00:35:51.630 "avg_latency_us": 9897.990232345928, 00:35:51.630 "min_latency_us": 542.72, 00:35:51.630 "max_latency_us": 15182.506666666666 00:35:51.630 } 00:35:51.630 ], 00:35:51.630 "core_count": 1 00:35:51.630 } 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2868760 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2868760 ']' 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2868760 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2868760 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2868760' 00:35:51.630 killing process with pid 2868760 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2868760 00:35:51.630 18:02:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2868760 00:35:51.630 18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:51.630 [2024-11-20 18:02:34.411142] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
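The JSON summary above is the product of a scripted failover dance: bdevperf attaches NVMe0 to the subsystem through both the 4420 and 4421 listeners, and while the 15-second verify workload runs, listeners are removed and re-added one at a time so the initiator is repeatedly forced onto a surviving path; io_failed=8213 counts the I/O that erred around those switches while the run still averaged about 12.4k IOPS. The sequence, condensed from the trace (rpc.py paths shortened):

# Two paths up front, then cycle the listeners under load.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # 15 s verify run
run_test_pid=$!
sleep 1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3      # I/O keeps completing over the 4421 path
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3      # only the 4422 path remains
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait "$run_test_pid"     # collect the JSON summary above

The repeated "recv state of tqpair ... is same with the state(6)" messages earlier and the ABORTED - SQ DELETION completions in the dump below are the expected noise of those listener removals: tearing a listener down deletes its submission queues, so in-flight commands complete as aborted and land in the io_failed tally.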
00:35:51.630 [2024-11-20 18:02:34.411234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868760 ] 00:35:51.630 [2024-11-20 18:02:34.493988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.630 [2024-11-20 18:02:34.540666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.630 Running I/O for 15 seconds... 00:35:51.630 11237.00 IOPS, 43.89 MiB/s [2024-11-20T17:02:51.546Z] [2024-11-20 18:02:36.908436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.631 [2024-11-20 18:02:36.908470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.631 [2024-11-20 18:02:36.908488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.631 [2024-11-20 18:02:36.908496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.631 [2024-11-20 18:02:36.908507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.631 [2024-11-20 18:02:36.908514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.631 [2024-11-20 18:02:36.908524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.631 [2024-11-20 18:02:36.908532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.631 [2024-11-20 18:02:36.908542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.631 [2024-11-20 18:02:36.908550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.631 [2024-11-20 18:02:36.908560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.631 [2024-11-20 18:02:36.908567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.631 [2024-11-20 18:02:36.908577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.631 [2024-11-20 18:02:36.908585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.631 [2024-11-20 18:02:36.908594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.631 [2024-11-20 18:02:36.908602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.631 [2024-11-20 18:02:36.908611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97048 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.631 [2024-11-20 18:02:36.908619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.631 [2024-11-20 18:02:36.908628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.631 [2024-11-20 18:02:36.908636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[same *NOTICE* command/completion pair repeated for every remaining queued READ (lba 97064-97272) and WRITE (lba 97288-98000) on qid:1, each aborted with SQ DELETION (00/08)]
00:35:51.634 [2024-11-20 18:02:36.910650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:51.634 [2024-11-20 18:02:36.910657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:51.634 [2024-11-20 18:02:36.910664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0
00:35:51.634 [2024-11-20 18:02:36.910672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.634 [2024-11-20 18:02:36.910709] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6cf070 was disconnected and freed. reset controller.
00:35:51.634 [2024-11-20 18:02:36.910718] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:35:51.634 [2024-11-20 18:02:36.910737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.634 [2024-11-20 18:02:36.910746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.634 [2024-11-20 18:02:36.910754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.635 [2024-11-20 18:02:36.910762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.635 [2024-11-20 18:02:36.910770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.635 [2024-11-20 18:02:36.910777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.635 [2024-11-20 18:02:36.910786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.635 [2024-11-20 18:02:36.910793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.635 [2024-11-20 18:02:36.910801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:51.635 [2024-11-20 18:02:36.910829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6adf50 (9): Bad file descriptor
00:35:51.635 [2024-11-20 18:02:36.914367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:51.635 [2024-11-20 18:02:36.995095] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:51.635 10808.50 IOPS, 42.22 MiB/s [2024-11-20T17:02:51.551Z] 11048.00 IOPS, 43.16 MiB/s [2024-11-20T17:02:51.551Z] 11259.25 IOPS, 43.98 MiB/s [2024-11-20T17:02:51.551Z]
00:35:51.635 [2024-11-20 18:02:40.405381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:51.635 [2024-11-20 18:02:40.405414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[same *NOTICE* command/completion pair repeated for every remaining queued WRITE (lba 46760-46864) and READ (lba 45848-46280) on qid:1, each aborted with SQ DELETION (00/08)]
00:35:51.637 [2024-11-20 18:02:40.406260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46288 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:51.637 [2024-11-20 18:02:40.406386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.637 [2024-11-20 18:02:40.406409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.637 [2024-11-20 18:02:40.406416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406504] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406622] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.638 [2024-11-20 18:02:40.406826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.638 [2024-11-20 18:02:40.406832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:51.639 [2024-11-20 18:02:40.406868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.639 [2024-11-20 18:02:40.406932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1050 is same with the state(6) to be set 00:35:51.639 [2024-11-20 18:02:40.406944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.639 [2024-11-20 18:02:40.406949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.639 [2024-11-20 18:02:40.406954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46744 len:8 PRP1 0x0 PRP2 0x0 00:35:51.639 [2024-11-20 18:02:40.406960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.639 [2024-11-20 18:02:40.406988] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6d1050 was disconnected and freed. reset controller. 
00:35:51.639 [2024-11-20 18:02:40.406996] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four ASYNC EVENT REQUEST (0c) admin commands omitted (qid:0, cid:3 down to cid:0), each completed ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 18:02:40.407057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-11-20 18:02:40.409498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-11-20 18:02:40.409518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6adf50 (9): Bad file descriptor
[2024-11-20 18:02:40.483248] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
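The records above are the substance of this stretch of log: the TCP qpair to 10.0.0.2:4421 went away, every queued I/O and admin command was completed with ABORTED - SQ DELETION (NVMe generic status 00/08), and bdev_nvme failed the controller over to the second path at 10.0.0.2:4422 before resetting it successfully. For reference, a minimal sketch of how a two-path controller like this is typically registered through SPDK's rpc.py; the bdev name Nvme0 and the exact spelling of the multipath flag are assumptions recalled from the rpc.py interface rather than taken from this log, while the addresses and subsystem NQN are the ones printed above:

# Sketch only: attach the same subsystem over two TCP paths so bdev_nvme
# can fail over between them (4421 -> 4422, as in the notice above).
# Assumes SPDK's scripts/rpc.py; verify the -x failover option locally.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover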
00:35:51.639 11354.60 IOPS, 44.35 MiB/s [2024-11-20T17:02:51.555Z] 11604.50 IOPS, 45.33 MiB/s [2024-11-20T17:02:51.555Z] 11776.57 IOPS, 46.00 MiB/s [2024-11-20T17:02:51.555Z] 11891.62 IOPS, 46.45 MiB/s [2024-11-20T17:02:51.555Z]
[2024-11-20 18:02:44.784136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 18:02:44.784178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs omitted: WRITE lba:124336-124968 and READ lba:124216-124328, len:8 each, every one completed ABORTED - SQ DELETION (00/08) on qid:1 ...]
[2024-11-20 18:02:44.785311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-20 18:02:44.785317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124976 len:8 PRP1 0x0 PRP2 0x0
[2024-11-20 18:02:44.785322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 18:02:44.785329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-20 18:02:44.785333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-20 18:02:44.785337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124984 len:8 PRP1 0x0 PRP2 0x0
[2024-11-20 18:02:44.785342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 18:02:44.785347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-20 18:02:44.785352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-20 18:02:44.785357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124992 len:8 PRP1 0x0 PRP2 0x0
[2024-11-20 18:02:44.785362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 18:02:44.785367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-20 18:02:44.785372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-20 18:02:44.785376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125000 len:8 PRP1 0x0 PRP2 0x0 00:35:51.642 [2024-11-20 18:02:44.785381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.642 [2024-11-20 18:02:44.785387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.642 [2024-11-20 18:02:44.785391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.642 [2024-11-20 18:02:44.785395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125008 len:8 PRP1 0x0 PRP2 0x0 00:35:51.642 [2024-11-20 18:02:44.785400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.642 [2024-11-20 18:02:44.785405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.642 [2024-11-20 18:02:44.785409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.642 [2024-11-20 18:02:44.785414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125016 len:8 PRP1 0x0 PRP2 0x0 00:35:51.642 [2024-11-20 18:02:44.785419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.642 [2024-11-20 18:02:44.785424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.642 [2024-11-20 18:02:44.785428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125024 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125032 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125040 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125048 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125056 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125064 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125072 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125080 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125088 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:125096 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125104 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125112 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125120 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125128 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125136 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125144 len:8 PRP1 0x0 PRP2 
0x0 00:35:51.643 [2024-11-20 18:02:44.785716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125152 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.643 [2024-11-20 18:02:44.785747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125160 len:8 PRP1 0x0 PRP2 0x0 00:35:51.643 [2024-11-20 18:02:44.785753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.643 [2024-11-20 18:02:44.785758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.643 [2024-11-20 18:02:44.785762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125168 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.644 [2024-11-20 18:02:44.785781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125176 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.644 [2024-11-20 18:02:44.785799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125184 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.644 [2024-11-20 18:02:44.785817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125192 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.644 [2024-11-20 18:02:44.785835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125200 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.644 [2024-11-20 18:02:44.785853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125208 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.644 [2024-11-20 18:02:44.785873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125216 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.644 [2024-11-20 18:02:44.785891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125224 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:51.644 [2024-11-20 18:02:44.785910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:51.644 [2024-11-20 18:02:44.785914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125232 len:8 PRP1 0x0 PRP2 0x0 00:35:51.644 [2024-11-20 18:02:44.785919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:51.644 [2024-11-20 18:02:44.785947] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6e9690 was disconnected and freed. reset controller. 
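Every entry in the run above is one event: when qid 1's submission queue was deleted for the failover reset, each queued command was completed with ABORTED - SQ DELETION, differing only in opcode, cid and lba. A throwaway sketch for boiling such a storm down when reading a saved copy of the console output (failover.log is a hypothetical file name, not one the harness produces):

  # count the aborted commands and print the lowest/highest LBA they touched
  grep -c 'ABORTED - SQ DELETION' failover.log
  grep -o 'lba:[0-9]*' failover.log | cut -d: -f2 | sort -n | sed -n '1p;$p'
  # here that yields 124248 and 125232, matching the first and last aborts shown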
00:35:51.644 [2024-11-20 18:02:44.785954] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:35:51.644 [2024-11-20 18:02:44.785970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.644 [2024-11-20 18:02:44.785976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.644 [2024-11-20 18:02:44.785982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.644 [2024-11-20 18:02:44.785988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.644 [2024-11-20 18:02:44.785994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.644 [2024-11-20 18:02:44.786000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.644 [2024-11-20 18:02:44.786006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.644 [2024-11-20 18:02:44.786011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.644 [2024-11-20 18:02:44.786017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:51.644 [2024-11-20 18:02:44.786035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6adf50 (9): Bad file descriptor
00:35:51.644 [2024-11-20 18:02:44.788466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:51.644 [2024-11-20 18:02:44.816295] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:51.644 11962.56 IOPS, 46.73 MiB/s [2024-11-20T17:02:51.560Z]
00:35:51.644 12078.20 IOPS, 47.18 MiB/s [2024-11-20T17:02:51.560Z]
00:35:51.644 12149.00 IOPS, 47.46 MiB/s [2024-11-20T17:02:51.560Z]
00:35:51.644 12232.75 IOPS, 47.78 MiB/s [2024-11-20T17:02:51.560Z]
00:35:51.644 12281.38 IOPS, 47.97 MiB/s [2024-11-20T17:02:51.560Z]
00:35:51.644 12334.07 IOPS, 48.18 MiB/s
00:35:51.644 Latency(us)
00:35:51.644 [2024-11-20T17:02:51.560Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s    Average  min     max
00:35:51.644 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:51.644 Verification LBA range: start 0x0 length 0x4000
00:35:51.644 NVMe0n1 : 15.00      12356.21  48.27  547.39  0.00    9897.99  542.72  15182.51
00:35:51.644 [2024-11-20T17:02:51.560Z] ===================================================================================================================
00:35:51.644 [2024-11-20T17:02:51.560Z] Total : 12356.21  48.27  547.39  0.00  9897.99  542.72  15182.51
00:35:51.644 Received shutdown signal, test time was about 15.000000 seconds
00:35:51.644
00:35:51.644 Latency(us)
00:35:51.644 [2024-11-20T17:02:51.560Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s    Average  min     max
00:35:51.644 [2024-11-20T17:02:51.560Z] ===================================================================================================================
00:35:51.644 [2024-11-20T17:02:51.560Z] Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2871741
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2871741 /var/tmp/bdevperf.sock
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2871741 ']'
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
18:02:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:35:52.217 [2024-11-20 18:02:52.095441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
18:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:35:52.478 [2024-11-20 18:02:52.275902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
18:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:53.049 NVMe0n1
18:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:53.049
00:35:53.310 18:02:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:53.310
00:35:53.571 18:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
18:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:53.832 18:02:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:35:57.131 18:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
18:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2872816
18:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
18:02:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2872816
00:35:58.072 {
00:35:58.072 "results": [
00:35:58.072 {
00:35:58.072 "job": "NVMe0n1",
00:35:58.072 "core_mask": "0x1",
00:35:58.072 "workload": "verify",
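Stripped of the xtrace prefixes, the failover round traced above is a short RPC sequence. A condensed sketch, with rpc.py and bdevperf.py standing in for the full /var/jenkins/... paths; every command and argument is taken from the trace (failover.sh lines 76-92):

  # expose two more TCP listeners on the target
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # hand bdevperf all three paths to the same subsystem
  for port in 4420 4421 4422; do
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # yank the active path so bdev_nvme has to fail over, then drive I/O
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests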
00:35:58.072 "status": "finished", 00:35:58.072 "verify_range": { 00:35:58.072 "start": 0, 00:35:58.072 "length": 16384 00:35:58.072 }, 00:35:58.072 "queue_depth": 128, 00:35:58.072 "io_size": 4096, 00:35:58.072 "runtime": 1.008041, 00:35:58.072 "iops": 13056.016570754562, 00:35:58.072 "mibps": 51.00006472951001, 00:35:58.072 "io_failed": 0, 00:35:58.072 "io_timeout": 0, 00:35:58.072 "avg_latency_us": 9767.05843021047, 00:35:58.072 "min_latency_us": 2143.5733333333333, 00:35:58.072 "max_latency_us": 7864.32 00:35:58.072 } 00:35:58.072 ], 00:35:58.072 "core_count": 1 00:35:58.072 } 00:35:58.072 18:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:58.072 [2024-11-20 18:02:51.144768] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:35:58.072 [2024-11-20 18:02:51.144828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871741 ] 00:35:58.072 [2024-11-20 18:02:51.219804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.072 [2024-11-20 18:02:51.246060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.072 [2024-11-20 18:02:53.550765] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:58.072 [2024-11-20 18:02:53.550801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.072 [2024-11-20 18:02:53.550809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.072 [2024-11-20 18:02:53.550817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.072 [2024-11-20 18:02:53.550822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.072 [2024-11-20 18:02:53.550828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.072 [2024-11-20 18:02:53.550833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.072 [2024-11-20 18:02:53.550839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.072 [2024-11-20 18:02:53.550845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.072 [2024-11-20 18:02:53.550855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:58.072 [2024-11-20 18:02:53.550876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:58.072 [2024-11-20 18:02:53.550887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b95f50 (9): Bad file descriptor 00:35:58.072 [2024-11-20 18:02:53.562034] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:58.072 Running I/O for 1 seconds... 
00:35:58.072 13033.00 IOPS, 50.91 MiB/s
00:35:58.072 Latency(us)
00:35:58.072 [2024-11-20T17:02:57.988Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s    Average  min      max
00:35:58.072 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:58.072 Verification LBA range: start 0x0 length 0x4000
00:35:58.072 NVMe0n1 : 1.01      13056.02  51.00  0.00  0.00    9767.06  2143.57  7864.32
00:35:58.072 [2024-11-20T17:02:57.988Z] ===================================================================================================================
00:35:58.072 [2024-11-20T17:02:57.988Z] Total : 13056.02  51.00  0.00  0.00  9767.06  2143.57  7864.32
18:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:02:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:35:58.334 18:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:58.596 18:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
18:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:58.858 18:02:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:36:02.174 18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2871741
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2871741 ']'
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2871741
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2871741
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2871741'
killing process with pid 2871741
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2871741
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2871741
18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
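The verify numbers the harness cares about sit inside the JSON that perform_tests printed above. A minimal sketch for pulling them out with jq, assuming the output had been saved to a hypothetical results.json (the key names are the ones visible in the log):

  # job name, throughput and failure count from the bdevperf results JSON
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, failed=\(.io_failed)"' results.json
  # cross-check of the headline figures: 13056.0166 IOPS * 4096 B / 2^20 B/MiB = 51.0001 MiB/s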
-- # sync 00:36:02.174 18:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.435 rmmod nvme_tcp 00:36:02.435 rmmod nvme_fabrics 00:36:02.435 rmmod nvme_keyring 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 2868183 ']' 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 2868183 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2868183 ']' 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2868183 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2868183 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2868183' 00:36:02.435 killing process with pid 2868183 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2868183 00:36:02.435 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2868183 00:36:02.696 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:02.696 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:02.697 18:03:02 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.697 18:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.610 18:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.610 00:36:04.610 real 0m40.323s 00:36:04.610 user 2m3.565s 00:36:04.610 sys 0m8.882s 00:36:04.610 18:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:04.610 18:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:04.610 ************************************ 00:36:04.610 END TEST nvmf_failover 00:36:04.610 ************************************ 00:36:04.871 18:03:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:36:04.871 18:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:04.871 18:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:04.871 18:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.871 ************************************ 00:36:04.871 START TEST nvmf_host_discovery 00:36:04.871 ************************************ 00:36:04.871 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:36:04.871 * Looking for test storage... 
00:36:04.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]]
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:36:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:04.871 --rc genhtml_branch_coverage=1
00:36:04.871 --rc genhtml_function_coverage=1
00:36:04.871 --rc genhtml_legend=1
00:36:04.871 --rc geninfo_all_blocks=1
00:36:04.871 --rc geninfo_unexecuted_blocks=1
00:36:04.871
00:36:04.871 '
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:36:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:04.871 --rc genhtml_branch_coverage=1
00:36:04.871 --rc genhtml_function_coverage=1
00:36:04.871 --rc genhtml_legend=1
00:36:04.871 --rc geninfo_all_blocks=1
00:36:04.871 --rc geninfo_unexecuted_blocks=1
00:36:04.871
00:36:04.871 '
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:36:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:04.871 --rc genhtml_branch_coverage=1
00:36:04.871 --rc genhtml_function_coverage=1
00:36:04.871 --rc genhtml_legend=1
00:36:04.871 --rc geninfo_all_blocks=1
00:36:04.871 --rc geninfo_unexecuted_blocks=1
00:36:04.871
00:36:04.871 '
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:36:04.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:04.871 --rc genhtml_branch_coverage=1
00:36:04.871 --rc genhtml_function_coverage=1
00:36:04.871 --rc genhtml_legend=1
00:36:04.871 --rc geninfo_all_blocks=1
00:36:04.871 --rc geninfo_unexecuted_blocks=1
00:36:04.871
00:36:04.871 '
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
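The lt 1.15 2 walk above is cmp_versions doing an element-wise compare of the dot-split version arrays; 1 < 2 decides it at the first element, so the installed lcov is treated as pre-2.0. A simplified standalone sketch of that logic, not the exact scripts/common.sh source:

  # true (exit 0) when version $1 sorts below version $2, comparing
  # numeric components split on '.' and '-', missing components read as 0
  ver_lt() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1   # equal is not less-than
  }
  ver_lt 1.15 2 && echo "lcov is older than 2"   # matches the trace's return 0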
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
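Note the "[: : integer expression expected" complaint from nvmf/common.sh line 33 above: the traced test is '[' '' -eq 1 ']', an unset variable handed to an arithmetic test, so every run of the script logs this noise. A defaulted expansion avoids it; the variable name is not visible in the trace, so SPDK_TEST_FLAG below is a stand-in:

  [ '' -eq 1 ]                       # reproduces: [: : integer expression expected
  [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]   # hypothetical guard: empty/unset reads as 0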
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:36:05.133 18:03:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:13.274 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:13.274 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:13.274 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:13.275 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:13.275 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.275 18:03:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:36:13.275 00:36:13.275 --- 10.0.0.2 ping statistics --- 00:36:13.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.275 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:13.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:36:13.275 00:36:13.275 --- 10.0.0.1 ping statistics --- 00:36:13.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.275 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=2878602 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 2878602 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2878602 ']' 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.275 [2024-11-20 18:03:12.482686] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
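[editor note] The trace above is the per-test network bring-up: the ice port cvl_0_0 is moved into a private namespace to act as the NVMe-oF target (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); an iptables rule opens TCP/4420, and one ping in each direction proves the data path before nvmf_tgt is started inside the namespace. A minimal standalone sketch of the same plumbing — interface, namespace, and address names are taken from this run and will differ on other machines:

  # Sketch of the nvmf_tcp_init steps traced above (names from this run only).
  TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                  # target port -> private namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"           # initiator stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                # target namespace -> root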
00:36:13.275 [2024-11-20 18:03:12.482753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.275 [2024-11-20 18:03:12.548432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.275 [2024-11-20 18:03:12.591890] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.275 [2024-11-20 18:03:12.591940] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.275 [2024-11-20 18:03:12.591946] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.275 [2024-11-20 18:03:12.591951] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.275 [2024-11-20 18:03:12.591961] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.275 [2024-11-20 18:03:12.591985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.275 [2024-11-20 18:03:12.725067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.275 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.276 [2024-11-20 18:03:12.737391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.276 null0 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.276 null1 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2878623 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2878623 /tmp/host.sock 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2878623 ']' 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:13.276 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:13.276 18:03:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.276 [2024-11-20 18:03:12.832103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
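[editor note] From this point two SPDK processes are running: the target (pid 2878602, core mask 0x2, inside cvl_0_0_ns_spdk, answering RPCs on the default /var/tmp/spdk.sock) and the host-side app under test (pid 2878623, core mask 0x1, answering on /tmp/host.sock). Each rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and the -s flag alone decides which of the two processes a call hits. Roughly, assuming the stock in-tree paths this job uses:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &  # target
  "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &                             # host under test
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192       # no -s: target socket
  "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers   # -s: host socket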
00:36:13.276 [2024-11-20 18:03:12.832175] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878623 ] 00:36:13.276 [2024-11-20 18:03:12.913948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.276 [2024-11-20 18:03:12.960519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:13.848 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.109 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:14.110 18:03:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 [2024-11-20 18:03:14.008482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:14.110 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:36:14.371 18:03:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:36:14.371 18:03:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:36:14.942 [2024-11-20 18:03:14.684559] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:14.942 [2024-11-20 18:03:14.684593] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:14.942 [2024-11-20 18:03:14.684608] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:14.942 
[2024-11-20 18:03:14.772869] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:15.209 [2024-11-20 18:03:15.000636] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:15.209 [2024-11-20 18:03:15.000673] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
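[editor note] The exchange above is the first discovery round trip: bdev_nvme_start_discovery connects the host app to the discovery service on 10.0.0.2:8009, the discovery log page reports nqn.2016-06.io.spdk:cnode0 at port 4420, and the host attaches it as controller nvme0, exposing namespace null0 as bdev nvme0n1. The polling idiom that dominates the rest of the trace can be reconstructed directly from the max=10 / sleep 1 / jq steps shown:

  # Re-evaluate a shell condition once a second, up to ~10 tries.
  waitforcondition() {
      local cond=$1 max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }
  # The two probes it is fed, exactly as traced:
  get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
  get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'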
00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.526 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:15.853 18:03:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.853 [2024-11-20 18:03:15.564584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:15.853 [2024-11-20 18:03:15.565015] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:15.853 [2024-11-20 18:03:15.565053] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:15.853 
18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:15.853 [2024-11-20 18:03:15.653319] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:15.853 18:03:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:36:15.853 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:15.854 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.854 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:15.854 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:15.854 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:15.854 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:15.854 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.854 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:36:15.854 18:03:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:36:16.134 [2024-11-20 18:03:15.760450] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:16.134 [2024-11-20 18:03:15.760480] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:16.134 [2024-11-20 18:03:15.760485] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:17.079 18:03:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.079 [2024-11-20 18:03:16.840291] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:17.079 [2024-11-20 18:03:16.840313] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:17.079 [2024-11-20 18:03:16.842025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:17.079 [2024-11-20 18:03:16.842043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.079 [2024-11-20 18:03:16.842059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:17.079 [2024-11-20 18:03:16.842067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.079 [2024-11-20 18:03:16.842075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:17.079 [2024-11-20 18:03:16.842082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.079 [2024-11-20 18:03:16.842090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:17.079 [2024-11-20 18:03:16.842097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.079 [2024-11-20 18:03:16.842105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968970 is same with the state(6) to be set 00:36:17.079 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:17.080 [2024-11-20 18:03:16.852040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968970 (9): Bad file descriptor 00:36:17.080 [2024-11-20 18:03:16.862079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:17.080 [2024-11-20 18:03:16.862328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.080 [2024-11-20 18:03:16.862344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x968970 with addr=10.0.0.2, port=4420 00:36:17.080 [2024-11-20 18:03:16.862352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968970 is same with the state(6) to be set 00:36:17.080 [2024-11-20 18:03:16.862365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968970 (9): Bad file descriptor 00:36:17.080 [2024-11-20 18:03:16.862383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:17.080 [2024-11-20 18:03:16.862390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:17.080 [2024-11-20 18:03:16.862398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
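The waitforcondition wrapper traced throughout this section (common/autotest_common.sh@914-@920) is a plain poll-with-timeout loop: it evals the condition string up to ten times, sleeping one second between attempts. A minimal reconstruction from the trace entries (the shipped helper may differ in detail):

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met: stop polling
            sleep 1
        done
        return 1                       # timed out
    }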
00:36:17.080 [2024-11-20 18:03:16.862410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.080 [2024-11-20 18:03:16.872136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:17.080 [2024-11-20 18:03:16.872554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.080 [2024-11-20 18:03:16.872568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x968970 with addr=10.0.0.2, port=4420 00:36:17.080 [2024-11-20 18:03:16.872575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968970 is same with the state(6) to be set 00:36:17.080 [2024-11-20 18:03:16.872586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968970 (9): Bad file descriptor 00:36:17.080 [2024-11-20 18:03:16.872604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:17.080 [2024-11-20 18:03:16.872611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:17.080 [2024-11-20 18:03:16.872618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:17.080 [2024-11-20 18:03:16.872628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:17.080 [2024-11-20 18:03:16.882190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:17.080 [2024-11-20 18:03:16.882512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.080 [2024-11-20 18:03:16.882524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x968970 with addr=10.0.0.2, port=4420 00:36:17.080 [2024-11-20 18:03:16.882531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968970 is same with the state(6) to be set 00:36:17.080 [2024-11-20 18:03:16.882542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968970 (9): Bad file descriptor 00:36:17.080 [2024-11-20 18:03:16.882552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:17.080 [2024-11-20 18:03:16.882559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:17.080 [2024-11-20 18:03:16.882566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:17.080 [2024-11-20 18:03:16.882577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
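The port checks above and below come from get_subsystem_paths (host/discovery.sh@63), which lists the trsvcid of every path on a controller. Judging from the rpc_cmd/jq/sort/xargs entries in this trace, the helper amounts to approximately:

    get_subsystem_paths() {
        # one line of service IDs, numerically sorted, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

Once the 4420 listener is removed it reports only 4421, which is what the @131 condition further down waits for.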
00:36:17.080 [2024-11-20 18:03:16.892243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:17.080 [2024-11-20 18:03:16.892599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.080 [2024-11-20 18:03:16.892612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x968970 with addr=10.0.0.2, port=4420 00:36:17.080 [2024-11-20 18:03:16.892619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968970 is same with the state(6) to be set 00:36:17.080 [2024-11-20 18:03:16.892630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968970 (9): Bad file descriptor 00:36:17.080 [2024-11-20 18:03:16.892641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:17.080 [2024-11-20 18:03:16.892647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:17.080 [2024-11-20 18:03:16.892654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:17.080 [2024-11-20 18:03:16.892665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:17.080 [2024-11-20 18:03:16.902297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:36:17.080 [2024-11-20 18:03:16.902643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.080 [2024-11-20 18:03:16.902655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x968970 with addr=10.0.0.2, port=4420 00:36:17.080 [2024-11-20 18:03:16.902663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968970 is same with the state(6) to be set 00:36:17.080 [2024-11-20 18:03:16.902673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968970 (9): Bad file descriptor 00:36:17.080 [2024-11-20 18:03:16.902684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:17.080 [2024-11-20 18:03:16.902690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:17.080 [2024-11-20 18:03:16.902697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:17.080 [2024-11-20 18:03:16.902708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:17.080 [2024-11-20 18:03:16.912350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:17.080 [2024-11-20 18:03:16.912585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.080 [2024-11-20 18:03:16.912598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x968970 with addr=10.0.0.2, port=4420 00:36:17.080 [2024-11-20 18:03:16.912606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968970 is same with the state(6) to be set 00:36:17.080 [2024-11-20 18:03:16.912617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968970 (9): Bad file descriptor 00:36:17.080 [2024-11-20 18:03:16.912627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:17.080 [2024-11-20 18:03:16.912634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:17.080 [2024-11-20 18:03:16.912642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:17.080 [2024-11-20 18:03:16.912652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:17.080 [2024-11-20 18:03:16.922406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:17.080 [2024-11-20 18:03:16.922705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.080 [2024-11-20 18:03:16.922717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x968970 with addr=10.0.0.2, port=4420 00:36:17.080 [2024-11-20 18:03:16.922725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968970 is same with the state(6) to be set 00:36:17.080 [2024-11-20 18:03:16.922740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968970 (9): Bad file descriptor 00:36:17.080 [2024-11-20 18:03:16.922751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:17.080 [2024-11-20 18:03:16.922757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:17.080 [2024-11-20 18:03:16.922764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:17.080 [2024-11-20 18:03:16.922775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
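The other wait conditions are built the same way; reconstructed from the host/discovery.sh@55, @59 and @74-@75 entries in this trace (an approximation, not copied from the source), the helpers look roughly like:

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # count events newer than the current notify_id, then advance it;
        # in this trace: -i 2 yields 0 new events here, 2 after stop_discovery
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }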
00:36:17.080 [2024-11-20 18:03:16.929148] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:36:17.080 [2024-11-20 18:03:16.929171] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:17.080 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:17.081 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.343 18:03:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.343 18:03:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.730 [2024-11-20 18:03:18.253980] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:18.730 [2024-11-20 18:03:18.253995] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:18.730 [2024-11-20 18:03:18.254003] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:18.730 [2024-11-20 18:03:18.342251] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:36:18.730 [2024-11-20 18:03:18.406809] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:18.730 [2024-11-20 18:03:18.406832] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.730 request: 00:36:18.730 { 00:36:18.730 "name": "nvme", 00:36:18.730 "trtype": "tcp", 00:36:18.730 "traddr": "10.0.0.2", 00:36:18.730 "adrfam": "ipv4", 00:36:18.730 "trsvcid": "8009", 00:36:18.730 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:18.730 "wait_for_attach": true, 00:36:18.730 "method": "bdev_nvme_start_discovery", 00:36:18.730 "req_id": 1 00:36:18.730 } 00:36:18.730 Got JSON-RPC error response 00:36:18.730 response: 00:36:18.730 { 00:36:18.730 "code": -17, 00:36:18.730 "message": "File exists" 00:36:18.730 } 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:18.730 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.731 request: 00:36:18.731 { 00:36:18.731 "name": "nvme_second", 00:36:18.731 "trtype": "tcp", 00:36:18.731 "traddr": "10.0.0.2", 00:36:18.731 "adrfam": "ipv4", 00:36:18.731 "trsvcid": "8009", 00:36:18.731 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:18.731 "wait_for_attach": true, 00:36:18.731 "method": "bdev_nvme_start_discovery", 00:36:18.731 "req_id": 1 00:36:18.731 } 00:36:18.731 Got JSON-RPC error response 00:36:18.731 response: 00:36:18.731 { 00:36:18.731 "code": -17, 00:36:18.731 "message": "File exists" 00:36:18.731 } 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
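Both NOT blocks assert the same rule: bdev_nvme_start_discovery rejects a discovery service that collides with a running one, whether by name (-b nvme already registered) or, as just above, by discovery endpoint (nvme_second pointed at the same 10.0.0.2:8009), and the RPC returns JSON-RPC error -17, reported as "File exists". The failing call, as issued here through the test's rpc_cmd wrapper against the host application's socket:

    # expected to fail with -17 while the first discovery service is running
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

The follow-up check below retargets nvme_second at port 8010, where nothing listens, and swaps -w (wait for attach) for -T 3000 (a 3 s attach timeout), so that attempt fails with -110, "Connection timed out", instead.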
00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:18.731 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.992 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:18.992 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:18.992 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:36:18.992 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:18.992 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:18.992 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:18.993 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:18.993 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:18.993 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:18.993 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.993 18:03:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:19.933 [2024-11-20 18:03:19.666219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.933 [2024-11-20 18:03:19.666241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x966bc0 with addr=10.0.0.2, port=8010 00:36:19.933 [2024-11-20 18:03:19.666250] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:19.933 [2024-11-20 18:03:19.666256] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:19.934 [2024-11-20 18:03:19.666260] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:36:20.878 [2024-11-20 18:03:20.668569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.878 [2024-11-20 18:03:20.668591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a0de0 with addr=10.0.0.2, port=8010 00:36:20.878 [2024-11-20 18:03:20.668599] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:20.878 [2024-11-20 18:03:20.668604] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:20.878 [2024-11-20 18:03:20.668609] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:36:21.821 [2024-11-20 18:03:21.670597] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:36:21.821 request: 00:36:21.821 { 00:36:21.821 "name": "nvme_second", 00:36:21.821 "trtype": "tcp", 00:36:21.821 "traddr": "10.0.0.2", 00:36:21.821 "adrfam": "ipv4", 00:36:21.821 "trsvcid": "8010", 00:36:21.821 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:21.821 "wait_for_attach": false, 00:36:21.821 "attach_timeout_ms": 3000, 00:36:21.821 "method": "bdev_nvme_start_discovery", 00:36:21.821 "req_id": 1 00:36:21.821 } 00:36:21.821 Got JSON-RPC error response 00:36:21.821 response: 00:36:21.821 { 00:36:21.821 "code": -110, 00:36:21.821 "message": "Connection timed out" 00:36:21.821 } 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2878623 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:21.821 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:36:22.082 rmmod nvme_tcp 00:36:22.082 rmmod nvme_fabrics 00:36:22.082 rmmod nvme_keyring 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 2878602 ']' 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 2878602 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2878602 ']' 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2878602 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2878602 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2878602' 00:36:22.082 killing process with pid 2878602 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2878602 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2878602 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.082 18:03:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.627 00:36:24.627 real 0m19.488s 00:36:24.627 user 0m22.249s 00:36:24.627 sys 0m7.272s 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 
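The teardown above runs killprocess against the nvmf target pid (2878602). From the common/autotest_common.sh@950-@974 entries in this trace, its shape is roughly as follows (a sketch; the real helper handles more cases, e.g. killing the child of a sudo-wrapped process, which the @960 check probes for):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                # @950 guard
        kill -0 "$pid"                           # @954: fails if already gone
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 here
        fi
        echo "killing process with pid $pid"     # @968
        kill "$pid"                              # @969
        wait "$pid"                              # @974
    }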
00:36:24.627 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:24.627 ************************************ 00:36:24.627 END TEST nvmf_host_discovery 00:36:24.627 ************************************ 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.627 ************************************ 00:36:24.627 START TEST nvmf_host_multipath_status 00:36:24.627 ************************************ 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:36:24.627 * Looking for test storage... 00:36:24.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:24.627 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.628 --rc genhtml_branch_coverage=1 00:36:24.628 --rc genhtml_function_coverage=1 00:36:24.628 --rc genhtml_legend=1 00:36:24.628 --rc geninfo_all_blocks=1 00:36:24.628 --rc geninfo_unexecuted_blocks=1 00:36:24.628 00:36:24.628 ' 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.628 --rc genhtml_branch_coverage=1 00:36:24.628 --rc genhtml_function_coverage=1 00:36:24.628 --rc genhtml_legend=1 00:36:24.628 --rc geninfo_all_blocks=1 00:36:24.628 --rc geninfo_unexecuted_blocks=1 00:36:24.628 00:36:24.628 ' 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.628 --rc genhtml_branch_coverage=1 00:36:24.628 --rc genhtml_function_coverage=1 00:36:24.628 --rc genhtml_legend=1 00:36:24.628 --rc geninfo_all_blocks=1 00:36:24.628 --rc geninfo_unexecuted_blocks=1 00:36:24.628 00:36:24.628 ' 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.628 --rc genhtml_branch_coverage=1 00:36:24.628 --rc genhtml_function_coverage=1 00:36:24.628 --rc genhtml_legend=1 00:36:24.628 --rc geninfo_all_blocks=1 00:36:24.628 --rc geninfo_unexecuted_blocks=1 00:36:24.628 00:36:24.628 ' 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
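Before multipath_status.sh sources nvmf/common.sh, it probes the installed lcov with lt 1.15 2, i.e. cmp_versions from scripts/common.sh (traced at @333-@368 above), to decide whether to export the branch/function coverage flags. A simplified reconstruction of that comparison (the traced original also sanitizes each component through a decimal helper):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"           # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>'* ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<'* ]]; return; }
        done
        [[ $op == *'='* ]]                       # all components equal
    }

For 1.15 vs 2 the first component already differs (1 < 2), so lt succeeds and the lcov_branch_coverage/lcov_function_coverage options shown in the LCOV_OPTS lines above get exported.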
00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.628 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:24.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
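The "[: : integer expression expected" complaint above is benign: the trace shows '[' '' -eq 1 ']' reaching nvmf/common.sh line 33, i.e. an unset flag hitting an arithmetic test, and the script simply falls through. A defensive form of that test would look like the sketch below; SOME_TEST_FLAG is a hypothetical placeholder, as the log does not show which variable was empty.

    # Default the flag to 0 so the arithmetic test never sees an empty string.
    # SOME_TEST_FLAG is a placeholder; the real variable name is not in the log.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
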
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:36:24.629 18:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:36:32.769 18:03:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:32.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:32.769 18:03:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:32.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:32.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:32.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
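The device scan above (gather_supported_nvmf_pci_devs) matches supported PCI IDs and then resolves each function to its kernel net interface through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A minimal sketch of that lookup, using the two ice-driven E810 functions reported in this run:

    # Map each NVMe-oF-capable PCI function to its net interface(s) via sysfs,
    # mirroring the pci_net_devs glob in nvmf/common.sh. The PCI addresses are
    # the two ports found above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue        # skip if the glob matched nothing
            echo "Found net devices under $pci: ${path##*/}"
        done
    done
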
-- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:32.769 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:32.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:32.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:36:32.770 00:36:32.770 --- 10.0.0.2 ping statistics --- 00:36:32.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.770 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:32.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:32.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:36:32.770 00:36:32.770 --- 10.0.0.1 ping statistics --- 00:36:32.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.770 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=2884761 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 2884761 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2884761 ']' 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
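Condensing nvmf_tcp_init from the trace above: one port is moved into a private namespace as the target (10.0.0.2), its sibling stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and reachability is ping-verified in both directions. A standalone sketch, run as root, with interface names and addresses exactly as in the log:

    # Reproduction of the nvmf_tcp_init steps traced above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
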
00:36:32.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:32.770 18:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:32.770 [2024-11-20 18:03:31.921344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:36:32.770 [2024-11-20 18:03:31.921413] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:32.770 [2024-11-20 18:03:32.011699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:32.770 [2024-11-20 18:03:32.058117] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:32.770 [2024-11-20 18:03:32.058175] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:32.770 [2024-11-20 18:03:32.058184] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:32.770 [2024-11-20 18:03:32.058193] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:32.770 [2024-11-20 18:03:32.058198] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:32.770 [2024-11-20 18:03:32.058290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.770 [2024-11-20 18:03:32.058292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.031 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:33.031 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:36:33.031 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:33.031 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:33.031 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:33.031 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:33.031 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2884761 00:36:33.031 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:33.292 [2024-11-20 18:03:32.946896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.292 18:03:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:33.292 Malloc0 00:36:33.552 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:36:33.552 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:33.813 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:34.073 [2024-11-20 18:03:33.766009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.074 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:34.074 [2024-11-20 18:03:33.962531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:34.334 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2885119 00:36:34.334 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:36:34.335 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:34.335 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2885119 /var/tmp/bdevperf.sock 00:36:34.335 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2885119 ']' 00:36:34.335 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:34.335 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:34.335 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:34.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
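The target-side provisioning in the trace reduces to six RPCs: a TCP transport, a 64 MiB / 512 B-block malloc bdev, a subsystem created with ANA reporting enabled (the -r flag, which is what makes the later set_ana_state calls meaningful), its namespace, and listeners on both 4420 and 4421. All commands below are copied from the trace, with only the rpc.py path shortened into a variable:

    # Provisioning distilled from the rpc.py calls above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns $NQN Malloc0
    $rpc nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
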
00:36:34.335 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:34.335 18:03:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:35.277 18:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:35.277 18:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:36:35.277 18:03:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:36:35.277 18:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:36:35.538 Nvme0n1 00:36:35.798 18:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:36:36.059 Nvme0n1 00:36:36.059 18:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:36:36.059 18:03:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:36:38.600 18:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:36:38.600 18:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:36:38.600 18:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:38.600 18:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:36:39.543 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:36:39.543 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:39.543 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:39.543 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:39.803 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:39.803 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:39.803 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
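On the initiator side, the trace attaches the same subsystem twice under one controller name; the second attach passes -x multipath, so both TCP connections aggregate into the single Nvme0n1 bdev whose paths the rest of the test flips between (the log prints "Nvme0n1" after each attach). A sketch of the two calls, taken from the expanded commands above and issued against bdevperf's private RPC socket:

    # Both paths terminate in one multipath-aware bdev (Nvme0n1).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n $NQN -l -1 -o 10
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n $NQN -x multipath -l -1 -o 10
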
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:39.803 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:39.803 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:39.803 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:39.803 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:39.803 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:40.063 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:40.063 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:40.063 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:40.063 18:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:40.324 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:40.324 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:40.324 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:40.324 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:40.324 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:40.324 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:40.324 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:40.324 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:40.585 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:40.585 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:36:40.585 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:40.845 18:03:40 
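Every set_ANA_state/check_status round in the trace is driven by two small helpers from multipath_status.sh, reconstructed below from the expanded commands above (the jq filter is copied verbatim). set_ANA_state changes the ANA group state on the target's listeners; port_status reads back what the initiator's bdev layer currently observes for a given listener port.

    # Reconstructed helpers; rpc path, NQN, and addresses are from the log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # set_ANA_state <state-4420> <state-4421>: flip ANA on the target side.
    set_ANA_state() {
        $rpc nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    # port_status <port> <field>: query the initiator's view of one path.
    port_status() {
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
    }
    set_ANA_state non_optimized optimized
    sleep 1                    # give the initiator time to re-read ANA state
    port_status 4420 current   # expected: false, since the optimized 4421 path is preferred
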
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:41.105 18:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:36:42.047 18:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:36:42.047 18:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:42.047 18:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.047 18:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:42.307 18:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:42.307 18:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:42.307 18:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.307 18:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:42.307 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:42.308 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:42.308 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.308 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:42.568 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:42.568 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:42.568 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.568 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:42.828 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:42.828 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:42.828 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.829 18:03:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:42.829 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:42.829 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:42.829 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.829 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:43.089 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:43.089 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:36:43.089 18:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:43.349 18:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:43.609 18:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:36:44.550 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:36:44.550 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:44.550 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:44.550 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:44.550 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:44.550 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:44.550 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:44.550 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:44.810 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:44.810 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:44.810 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:44.810 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:45.069 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:45.069 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:45.069 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:45.069 18:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:45.329 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:45.329 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:45.329 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:45.329 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:45.329 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:45.329 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:45.329 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:45.329 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:45.589 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:45.589 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:36:45.589 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:45.849 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:45.849 18:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:36:47.234 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:36:47.234 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:47.234 18:03:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.234 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:47.234 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:47.234 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:47.234 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.234 18:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:47.234 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:47.234 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:47.234 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.234 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:47.494 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:47.494 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:47.494 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.494 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:47.755 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:47.755 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:47.755 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.755 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:47.755 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:47.755 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:47.755 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.755 18:03:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:48.015 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:48.015 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:36:48.015 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:48.276 18:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:48.276 18:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:49.659 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:49.919 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:49.919 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:49.919 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:49.919 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:50.179 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:50.179 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:50.179 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:50.179 18:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:50.179 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:50.179 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:50.179 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:50.179 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:50.439 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:50.439 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:36:50.439 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:50.700 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:50.700 18:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:36:52.084 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:36:52.084 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:52.084 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:52.084 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:52.084 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:52.085 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:52.085 18:03:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:52.085 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:52.085 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:52.085 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:52.085 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:52.085 18:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:52.345 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:52.345 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:52.346 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:52.346 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:52.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:52.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:52.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:52.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:52.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:52.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:52.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:52.606 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:52.866 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:52.866 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:36:53.127 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
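The policy switch just above moves Nvme0n1 from the default active_passive selection to active_active: instead of keeping one path as standby, I/O is spread across all optimized paths, which is why the very next check_status in the trace expects current == true on both 4420 and 4421 once both listeners are set optimized. The command, as issued above against bdevperf's socket:

    # Spread I/O over all optimized paths of the aggregated bdev.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy \
        -b Nvme0n1 -p active_active
    # With ANA optimized/optimized, both paths now report current == true,
    # matching the check_status true true ... assertion that follows.
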
set_ANA_state optimized optimized 00:36:53.127 18:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:36:53.127 18:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:53.388 18:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:36:54.329 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:36:54.329 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:54.329 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.329 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:54.589 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.589 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:54.589 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.589 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:54.849 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.850 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:54.850 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.850 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:54.850 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.850 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:54.850 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.850 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:55.109 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:55.109 18:03:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:55.109 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:55.109 18:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:55.370 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:55.370 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:55.370 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:55.370 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:55.370 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:55.370 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:36:55.370 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:55.629 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:55.888 18:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:36:56.827 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:36:56.827 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:56.827 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:56.827 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:57.086 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:57.086 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:57.086 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.086 18:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:57.346 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.346 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:57.346 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.346 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:57.346 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.346 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:57.346 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.346 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:57.606 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.606 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:57.606 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.606 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:57.866 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.866 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:57.866 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.866 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:57.866 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.866 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:36:57.866 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:58.125 18:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:58.386 18:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
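For reference, the helpers being traced here can be read back out of the log itself. Below is a minimal bash sketch of what multipath_status.sh's set_ANA_state, port_status and check_status evidently do, reconstructed from the @59/@60 and @64 commands above; the rpc.py path, bdevperf socket, subsystem NQN, address and ports are exactly as logged, while the function bodies themselves are an assumption.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Set the ANA state of the 4420 listener to $1 and of the 4421 listener to $2,
# matching the two nvmf_subsystem_listener_set_ana_state calls in the trace.
set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Succeed only if attribute $2 (current/connected/accessible) of the io_path
# on port $1 equals the expected value $3, queried through bdevperf's RPC socket.
port_status() {
    local status
    status=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$status" == "$3" ]]
}

# The six expected flags, in the order the trace checks them (@68..@73):
# 4420 current, 4421 current, 4420 connected, 4421 connected,
# 4420 accessible, 4421 accessible.
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Note the pattern in the trace: after every set_ANA_state the script sleeps one second before check_status, giving the initiator time to observe the ANA change before the path flags are compared.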
00:36:59.326 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:36:59.326 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:59.326 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.326 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:59.585 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:59.585 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:59.585 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.585 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:59.585 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:59.586 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:59.586 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.586 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:59.846 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:59.846 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:59.846 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.846 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:00.107 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:00.107 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:00.107 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:00.107 18:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:00.367 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:00.367 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:00.367 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:00.367 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:00.367 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:00.367 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:37:00.367 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:00.628 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:00.888 18:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:37:01.831 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:37:01.831 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:01.831 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:01.831 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:02.091 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:02.091 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:02.091 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.091 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:02.091 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:02.091 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:02.092 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.092 18:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:02.351 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:37:02.351 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:02.351 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.351 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:02.610 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:02.610 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:02.610 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:02.610 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2885119 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2885119 ']' 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2885119 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:02.870 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2885119 00:37:03.184 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:03.184 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:03.184 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2885119' 00:37:03.184 killing process with pid 2885119 00:37:03.184 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2885119 00:37:03.184 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2885119 00:37:03.184 { 00:37:03.184 "results": [ 00:37:03.184 { 00:37:03.184 "job": "Nvme0n1", 
00:37:03.184 "core_mask": "0x4", 00:37:03.184 "workload": "verify", 00:37:03.184 "status": "terminated", 00:37:03.184 "verify_range": { 00:37:03.184 "start": 0, 00:37:03.184 "length": 16384 00:37:03.184 }, 00:37:03.184 "queue_depth": 128, 00:37:03.184 "io_size": 4096, 00:37:03.184 "runtime": 26.77251, 00:37:03.184 "iops": 11831.501790455957, 00:37:03.184 "mibps": 46.21680386896858, 00:37:03.184 "io_failed": 0, 00:37:03.184 "io_timeout": 0, 00:37:03.184 "avg_latency_us": 10799.530870598783, 00:37:03.184 "min_latency_us": 377.17333333333335, 00:37:03.184 "max_latency_us": 3019898.88 00:37:03.184 } 00:37:03.184 ], 00:37:03.184 "core_count": 1 00:37:03.184 } 00:37:03.184 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2885119 00:37:03.184 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:03.184 [2024-11-20 18:03:34.040667] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:37:03.184 [2024-11-20 18:03:34.040742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885119 ] 00:37:03.184 [2024-11-20 18:03:34.121077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.184 [2024-11-20 18:03:34.168724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:03.184 [2024-11-20 18:03:35.808502] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:37:03.184 Running I/O for 90 seconds... 
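The bdevperf result object printed above is internally consistent: with 4096-byte I/O, iops * io_size / 2^20 should reproduce mibps, and 11831.5 * 4096 / 1048576 ≈ 46.22 MiB/s matches the reported value. A quick jq check of that identity, assuming the JSON has been captured to a file (the bdevperf_result.json name is hypothetical; the field names are the ones printed above):

jq -r '.results[] |
  "\(.job): \(.iops) IOPS x \(.io_size) B = \(.iops * .io_size / 1048576) MiB/s (reported \(.mibps))"' \
  bdevperf_result.json

The "terminated" status and the 26.77 s runtime against the announced 90-second run are consistent with the script killing bdevperf at @137 once all the multipath status checks had passed.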
00:37:03.184 10408.00 IOPS, 40.66 MiB/s [2024-11-20T17:04:03.100Z] 10726.50 IOPS, 41.90 MiB/s [2024-11-20T17:04:03.100Z] 10774.00 IOPS, 42.09 MiB/s [2024-11-20T17:04:03.100Z] 11134.00 IOPS, 43.49 MiB/s [2024-11-20T17:04:03.100Z] 11447.00 IOPS, 44.71 MiB/s [2024-11-20T17:04:03.100Z] 11680.83 IOPS, 45.63 MiB/s [2024-11-20T17:04:03.100Z] 11846.43 IOPS, 46.28 MiB/s [2024-11-20T17:04:03.100Z] 11954.75 IOPS, 46.70 MiB/s [2024-11-20T17:04:03.100Z] 12017.78 IOPS, 46.94 MiB/s [2024-11-20T17:04:03.100Z] 12075.20 IOPS, 47.17 MiB/s [2024-11-20T17:04:03.100Z] 12117.91 IOPS, 47.34 MiB/s [2024-11-20T17:04:03.100Z] [2024-11-20 18:03:47.953616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.184 [2024-11-20 18:03:47.953651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.184 [2024-11-20 18:03:47.953895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:03.184 [2024-11-20 18:03:47.953905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.953910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.953921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.953926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.953936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.953941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 
p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.953951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.953956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.953967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.953972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.953982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.953988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.953999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.954005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.954016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.954021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.954032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.954037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.954047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.954052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.955266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.185 [2024-11-20 18:03:47.955412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 
18:03:47.955491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113344 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:03.185 [2024-11-20 18:03:47.955799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.185 [2024-11-20 18:03:47.955805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.955994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.955999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
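The qpair completions throughout this dump all carry the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02): the path-related ANA code a target returns while the listener serving that path is in the inaccessible state, which is what prompts the host to retry the I/O on the other path. One way to tally the completion statuses across the whole capture (the try.txt path is the file cat'd at @141 above; the grep pipeline itself is just a sketch):

grep -o 'ASYMMETRIC ACCESS [A-Z]*' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
    sort | uniq -c | sort -rn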
00:37:03.186 [2024-11-20 18:03:47.956132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:03:47.956252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.186 [2024-11-20 18:03:47.956273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.186 [2024-11-20 18:03:47.956295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.186 [2024-11-20 18:03:47.956317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.186 [2024-11-20 18:03:47.956339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.186 [2024-11-20 18:03:47.956359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.186 [2024-11-20 18:03:47.956381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:03:47.956397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:03.186 [2024-11-20 18:03:47.956402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:03.186 12098.83 IOPS, 47.26 MiB/s [2024-11-20T17:04:03.102Z] 11168.15 IOPS, 43.63 MiB/s [2024-11-20T17:04:03.102Z] 10370.43 IOPS, 40.51 MiB/s [2024-11-20T17:04:03.102Z] 9730.60 IOPS, 38.01 MiB/s [2024-11-20T17:04:03.102Z] 9923.31 IOPS, 38.76 MiB/s [2024-11-20T17:04:03.102Z] 10118.71 IOPS, 39.53 MiB/s [2024-11-20T17:04:03.102Z] 10482.28 IOPS, 40.95 MiB/s [2024-11-20T17:04:03.102Z] 10820.58 IOPS, 42.27 MiB/s [2024-11-20T17:04:03.102Z] 11015.85 IOPS, 43.03 MiB/s [2024-11-20T17:04:03.102Z] 11097.67 IOPS, 43.35 MiB/s [2024-11-20T17:04:03.102Z] 11178.18 IOPS, 43.66 MiB/s [2024-11-20T17:04:03.102Z] 11408.17 IOPS, 44.56 MiB/s [2024-11-20T17:04:03.102Z] 11633.50 IOPS, 45.44 MiB/s [2024-11-20T17:04:03.102Z] [2024-11-20 18:04:00.571886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:04:00.571921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:04:00.571955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:04:00.571962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:04:00.571972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:04:00.571978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:04:00.571988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:04:00.571993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:03.186 [2024-11-20 18:04:00.572004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:03.186 [2024-11-20 18:04:00.572009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
[... a long run of near-identical nvme_qpair.c *NOTICE* pairs elided (18:04:00.572019 - 18:04:00.574747): WRITE commands for lba 77680-78576 and READ commands for lba 77568-77816, all on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 while the path is inaccessible ...]
00:37:03.188 11769.52 IOPS, 45.97 MiB/s [2024-11-20T17:04:03.104Z] 11810.50 IOPS, 46.13 MiB/s [2024-11-20T17:04:03.104Z] Received shutdown signal, test time was about 26.773118 seconds
00:37:03.188
00:37:03.188                                                Latency(us)
00:37:03.188 [2024-11-20T17:04:03.104Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:03.188 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:37:03.188 Verification LBA range: start 0x0 length 0x4000
00:37:03.188 Nvme0n1            :      26.77   11831.50      46.22       0.00       0.00   10799.53     377.17 3019898.88
00:37:03.188
[2024-11-20T17:04:03.104Z] =================================================================================================================== 00:37:03.188 [2024-11-20T17:04:03.104Z] Total : 11831.50 46.22 0.00 0.00 10799.53 377.17 3019898.88 00:37:03.188 [2024-11-20 18:04:02.819371] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:37:03.188 18:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:03.477 rmmod nvme_tcp 00:37:03.477 rmmod nvme_fabrics 00:37:03.477 rmmod nvme_keyring 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 2884761 ']' 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 2884761 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2884761 ']' 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2884761 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2884761 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2884761' 00:37:03.477 killing process with pid 2884761 00:37:03.477 18:04:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2884761 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2884761 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:03.477 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:37:03.738 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:03.738 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:03.738 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.738 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.738 18:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.650 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:05.650 00:37:05.650 real 0m41.342s 00:37:05.650 user 1m46.882s 00:37:05.650 sys 0m11.610s 00:37:05.650 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:05.650 18:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:05.650 ************************************ 00:37:05.650 END TEST nvmf_host_multipath_status 00:37:05.650 ************************************ 00:37:05.650 18:04:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:37:05.650 18:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:05.650 18:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:05.650 18:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.650 ************************************ 00:37:05.650 START TEST nvmf_discovery_remove_ifc 00:37:05.650 ************************************ 00:37:05.650 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:37:05.912 * Looking for test storage... 
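Stepping back for a moment: the nvmf_host_multipath_status teardown that just completed above follows the standard autotest pattern -- sync, retry unloading the initiator kernel modules (rmmod can race with in-flight disconnects), then kill and reap the target process. A minimal stand-alone sketch of that pattern; function and variable names here are illustrative, not the actual nvmf/common.sh helpers:

cleanup_nvmf_test() {
    local target_pid=$1
    sync                                  # flush outstanding writes first
    set +e                                # unload may fail while disconnects drain
    for i in {1..20}; do                  # retry loop, as in the trace above
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    if [ -n "$target_pid" ] && kill -0 "$target_pid" 2>/dev/null; then
        echo "killing process with pid $target_pid"
        kill "$target_pid"
        wait "$target_pid"                # reap it so the next test starts clean
    fi
}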
00:37:05.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:37:05.912 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:05.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.913 --rc genhtml_branch_coverage=1 00:37:05.913 --rc genhtml_function_coverage=1 00:37:05.913 --rc genhtml_legend=1 00:37:05.913 --rc geninfo_all_blocks=1 00:37:05.913 --rc geninfo_unexecuted_blocks=1 00:37:05.913 00:37:05.913 ' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:05.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.913 --rc genhtml_branch_coverage=1 00:37:05.913 --rc genhtml_function_coverage=1 00:37:05.913 --rc genhtml_legend=1 00:37:05.913 --rc geninfo_all_blocks=1 00:37:05.913 --rc geninfo_unexecuted_blocks=1 00:37:05.913 00:37:05.913 ' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:05.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.913 --rc genhtml_branch_coverage=1 00:37:05.913 --rc genhtml_function_coverage=1 00:37:05.913 --rc genhtml_legend=1 00:37:05.913 --rc geninfo_all_blocks=1 00:37:05.913 --rc geninfo_unexecuted_blocks=1 00:37:05.913 00:37:05.913 ' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:05.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.913 --rc genhtml_branch_coverage=1 00:37:05.913 --rc genhtml_function_coverage=1 00:37:05.913 --rc genhtml_legend=1 00:37:05.913 --rc geninfo_all_blocks=1 00:37:05.913 --rc geninfo_unexecuted_blocks=1 00:37:05.913 00:37:05.913 ' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:05.913 
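The lt/cmp_versions trace above boils down to a numeric, component-wise compare of dot-separated versions (here deciding that lcov 1.15 predates 2, so the older LCOV option set is exported). A simplified re-implementation for illustration -- the real scripts/common.sh helper also splits on '-' and ':' and supports more operators:

version_lt() {
    local IFS=.
    local -a a=($1) b=($2)                       # split on dots: 1.15 -> (1 15)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}          # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                     # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"             # matches the trace's result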
18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:05.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:37:05.913 18:04:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:37:14.056 18:04:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:14.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:14.056 18:04:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:14.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:14.056 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:14.057 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:14.057 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 
-- # (( 2 == 0 )) 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.057 18:04:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.057 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:14.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms
00:37:14.057
00:37:14.057 --- 10.0.0.2 ping statistics ---
00:37:14.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:14.057 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:14.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:14.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms
00:37:14.057
00:37:14.057 --- 10.0.0.1 ping statistics ---
00:37:14.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:14.057 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=2894898
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 2894898
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2894898 ']'
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
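Everything nvmf_tcp_init just traced can be read as one compact recipe: move the target NIC into its own network namespace, address both ends of the link, punch the NVMe/TCP listener port through the firewall, and prove two-way reachability before launching the target. A condensed sketch using this run's interface and namespace names (cvl_0_0/cvl_0_1 are the two E810 ports discovered earlier); commands mirror the trace, error handling omitted:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from a clean slate
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target side enters the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target ns -> root ns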
00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:14.057 18:04:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:14.057 [2024-11-20 18:04:13.337155] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:37:14.057 [2024-11-20 18:04:13.337231] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.057 [2024-11-20 18:04:13.425950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.057 [2024-11-20 18:04:13.471409] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.057 [2024-11-20 18:04:13.471466] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.057 [2024-11-20 18:04:13.471475] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.057 [2024-11-20 18:04:13.471482] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.057 [2024-11-20 18:04:13.471488] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.057 [2024-11-20 18:04:13.471509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.319 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:14.319 [2024-11-20 18:04:14.212964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:14.319 [2024-11-20 18:04:14.221274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:37:14.580 null0 00:37:14.580 [2024-11-20 18:04:14.253181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.580 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.580 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2894945 00:37:14.580 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:37:14.580 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 2894945 /tmp/host.sock 00:37:14.580 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2894945 ']' 00:37:14.580 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:37:14.580 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:14.581 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:37:14.581 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:37:14.581 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:14.581 18:04:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:14.581 [2024-11-20 18:04:14.329911] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:37:14.581 [2024-11-20 18:04:14.329972] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894945 ] 00:37:14.581 [2024-11-20 18:04:14.412760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.581 [2024-11-20 18:04:14.460307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:37:15.523 18:04:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.523 18:04:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:16.465 [2024-11-20 18:04:16.292074] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:16.465 [2024-11-20 18:04:16.292102] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:16.466 [2024-11-20 18:04:16.292115] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:16.726 [2024-11-20 18:04:16.421521] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:37:16.987 [2024-11-20 18:04:16.645408] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:37:16.987 [2024-11-20 18:04:16.645458] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:37:16.987 [2024-11-20 18:04:16.645481] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:37:16.987 [2024-11-20 18:04:16.645495] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:16.987 [2024-11-20 18:04:16.645516] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.987 [2024-11-20 18:04:16.691938] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2332690 was disconnected and freed. delete nvme_qpair. 
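After discovery attaches the subsystem, the script calls wait_for_bdev nvme0n1; the repeated get_bdev_list traces around this point are that poll loop, which lists bdev names over the host app's RPC socket, normalizes them with jq/sort/xargs, and compares against the expected name once per second. A sketch of the two helpers as exercised here, assuming SPDK's scripts/rpc.py is on PATH (the log goes through the rpc_cmd wrapper) and the /tmp/host.sock socket from this run:

    # get_bdev_list/wait_for_bdev polling pattern from the traces (sketch).
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1          # matches the "sleep 1" between polls in the log
        done
    }
    wait_for_bdev nvme0n1    # returns once discovery has attached the namespace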
00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:16.987 18:04:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:18.372 18:04:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.314 18:04:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:19.314 18:04:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:20.256 18:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:20.256 18:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:20.256 18:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:20.256 18:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.256 18:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:20.256 18:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:20.256 18:04:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:20.256 18:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.256 18:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:20.256 18:04:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:21.199 18:04:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:22.585 [2024-11-20 18:04:22.096022] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:37:22.585 [2024-11-20 18:04:22.096057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.585 [2024-11-20 18:04:22.096067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.585 [2024-11-20 18:04:22.096074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.585 [2024-11-20 18:04:22.096080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.585 [2024-11-20 18:04:22.096086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.585 [2024-11-20 18:04:22.096091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.585 [2024-11-20 18:04:22.096096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.585 [2024-11-20 18:04:22.096102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.585 [2024-11-20 18:04:22.096108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.585 [2024-11-20 18:04:22.096114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.585 [2024-11-20 18:04:22.096119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230edf0 is same with the state(6) to be set 00:37:22.585 [2024-11-20 18:04:22.106043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230edf0 (9): Bad file descriptor 00:37:22.585 [2024-11-20 18:04:22.116078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:22.585 18:04:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:23.527 [2024-11-20 18:04:23.131215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:37:23.527 [2024-11-20 18:04:23.131306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230edf0 with addr=10.0.0.2, port=4420 00:37:23.527 [2024-11-20 18:04:23.131338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x230edf0 is same with the state(6) to be set 00:37:23.527 [2024-11-20 18:04:23.131390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230edf0 (9): Bad file descriptor 00:37:23.527 [2024-11-20 18:04:23.131498] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:23.527 [2024-11-20 18:04:23.131555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:23.527 [2024-11-20 18:04:23.131577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:23.527 [2024-11-20 18:04:23.131600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:23.527 [2024-11-20 18:04:23.131642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:23.527 [2024-11-20 18:04:23.131664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:23.527 18:04:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:24.470 [2024-11-20 18:04:24.134068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:24.470 [2024-11-20 18:04:24.134085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:24.470 [2024-11-20 18:04:24.134091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:24.470 [2024-11-20 18:04:24.134096] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:37:24.470 [2024-11-20 18:04:24.134105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
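The errno 110 (Connection timed out) read failure and the failed controller reset above are the intended fault: the test deleted 10.0.0.2 and downed cvl_0_0 inside the namespace, so with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 the host retries only briefly before giving up and deleting the controller together with its bdev. A sketch of the fault-injection step and the check that follows, reusing the helpers sketched earlier:

    # Sever the target-side path, then wait for the host to drop the bdev.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ctrlr-loss-timeout-sec=2 / reconnect-delay-sec=1: roughly two reconnect
    # attempts fail before the controller is deleted and the bdev list empties.
    wait_for_bdev ''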
00:37:24.470 [2024-11-20 18:04:24.134120] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:37:24.470 [2024-11-20 18:04:24.134136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.470 [2024-11-20 18:04:24.134143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.470 [2024-11-20 18:04:24.134150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.470 [2024-11-20 18:04:24.134155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.470 [2024-11-20 18:04:24.134169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.470 [2024-11-20 18:04:24.134174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.470 [2024-11-20 18:04:24.134180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.470 [2024-11-20 18:04:24.134185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.470 [2024-11-20 18:04:24.134190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.470 [2024-11-20 18:04:24.134196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.470 [2024-11-20 18:04:24.134200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
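Each command/completion pair in the dump above is one outstanding admin command being finished in software while the dead discovery-controller admin queue is torn down: four ASYNC EVENT REQUESTs (opcode 0x0c, cid 0-3) and one KEEP ALIVE (opcode 0x18, cid 4), all completed with generic status (00/08), i.e. ABORTED - SQ DELETION. If the host output is saved, the count is easy to confirm; a sketch assuming a hypothetical capture file named host.log:

    # This dump contributes five aborted completions: 4 AERs + 1 Keep Alive.
    grep -c 'ABORTED - SQ DELETION' host.log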
00:37:24.470 [2024-11-20 18:04:24.134926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fe4c0 (9): Bad file descriptor 00:37:24.470 [2024-11-20 18:04:24.135935] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:37:24.470 [2024-11-20 18:04:24.135943] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:24.470 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.731 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:24.731 18:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:25.673 18:04:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:25.673 18:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:26.617 [2024-11-20 18:04:26.187094] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:26.617 [2024-11-20 18:04:26.187108] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:26.617 [2024-11-20 18:04:26.187116] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:26.617 [2024-11-20 18:04:26.316507] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:37:26.617 [2024-11-20 18:04:26.376667] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:37:26.617 [2024-11-20 18:04:26.376695] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:37:26.617 [2024-11-20 18:04:26.376710] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:37:26.617 [2024-11-20 18:04:26.376720] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:37:26.617 [2024-11-20 18:04:26.376726] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:26.618 [2024-11-20 18:04:26.385031] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x230da90 was disconnected and freed. delete nvme_qpair. 
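With the address restored and the link back up, the still-running discovery service reconnects on its own: the same subsystem is re-attached as a new controller, nvme1, so the namespace bdev reappears under the incremented name nvme1n1 rather than nvme0n1, and the freed-qpair message confirms the old connection was cleaned up. A sketch of the recovery step, continuing the earlier helpers:

    # Restore the target path and wait for the re-attached namespace.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # new controller instance, hence nvme1n1 not nvme0n1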
00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2894945 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2894945 ']' 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2894945 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:37:26.618 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894945 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894945' 00:37:26.879 killing process with pid 2894945 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2894945 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2894945 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:26.879 rmmod nvme_tcp 00:37:26.879 rmmod nvme_fabrics 00:37:26.879 rmmod nvme_keyring 00:37:26.879 18:04:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 2894898 ']' 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 2894898 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2894898 ']' 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2894898 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:26.879 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894898 00:37:27.140 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:27.140 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894898' 00:37:27.141 killing process with pid 2894898 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2894898 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2894898 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.141 18:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.686 00:37:29.686 real 0m23.539s 00:37:29.686 user 0m27.550s 00:37:29.686 sys 0m7.311s 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:29.686 ************************************ 00:37:29.686 END TEST nvmf_discovery_remove_ifc 00:37:29.686 ************************************ 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.686 ************************************ 00:37:29.686 START TEST nvmf_identify_kernel_target 00:37:29.686 ************************************ 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:37:29.686 * Looking for test storage... 00:37:29.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:29.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.686 --rc genhtml_branch_coverage=1 00:37:29.686 --rc genhtml_function_coverage=1 00:37:29.686 --rc genhtml_legend=1 00:37:29.686 --rc geninfo_all_blocks=1 00:37:29.686 --rc geninfo_unexecuted_blocks=1 00:37:29.686 00:37:29.686 ' 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:29.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.686 --rc genhtml_branch_coverage=1 00:37:29.686 --rc genhtml_function_coverage=1 00:37:29.686 --rc genhtml_legend=1 00:37:29.686 --rc geninfo_all_blocks=1 00:37:29.686 --rc geninfo_unexecuted_blocks=1 00:37:29.686 00:37:29.686 ' 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:29.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.686 --rc genhtml_branch_coverage=1 00:37:29.686 --rc genhtml_function_coverage=1 00:37:29.686 --rc genhtml_legend=1 00:37:29.686 --rc geninfo_all_blocks=1 00:37:29.686 --rc geninfo_unexecuted_blocks=1 00:37:29.686 00:37:29.686 ' 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:29.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.686 --rc genhtml_branch_coverage=1 00:37:29.686 --rc genhtml_function_coverage=1 00:37:29.686 --rc genhtml_legend=1 00:37:29.686 --rc geninfo_all_blocks=1 00:37:29.686 --rc geninfo_unexecuted_blocks=1 00:37:29.686 00:37:29.686 ' 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.686 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:37:29.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.687 18:04:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:37.829 18:04:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:37.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:37.829 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # 
echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:37.830 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:37.830 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:37.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
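The trace above shows gather_supported_nvmf_pci_devs matching supported NIC device IDs and resolving each PCI function to its kernel net device via sysfs, yielding cvl_0_0 and cvl_0_1 under the two E810 ports. A minimal plain-shell sketch of that sysfs walk follows, assuming only the two E810 IDs seen in the trace; the real helper also builds x722 and Mellanox ID lists, uses a cached PCI bus map rather than per-device sysfs reads, and applies extra RDMA/link-state checks.

intel=0x8086
for pci in /sys/bus/pci/devices/0000:*; do
    # vendor/device sysfs files hold hex IDs such as 0x8086 / 0x159b
    [[ $(<"$pci/vendor") == "$intel" ]] || continue
    case $(<"$pci/device") in
        0x1592|0x159b) ;;   # E810 device IDs listed in the trace
        *) continue ;;
    esac
    # each PCI network function exposes its netdev name under net/
    for net_dev in "$pci"/net/*; do
        [[ -e $net_dev ]] || continue
        echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
    done
done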
00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:37:37.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:37.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:37:37.830 00:37:37.830 --- 10.0.0.2 ping statistics --- 00:37:37.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.830 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:37.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:37.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:37:37.830 00:37:37.830 --- 10.0.0.1 ping statistics --- 00:37:37.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.830 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:37.830 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:37:37.831 18:04:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:37.831 18:04:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:40.378 Waiting for block devices as requested 00:37:40.640 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:40.640 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:40.640 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:40.901 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:40.901 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:40.901 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:41.161 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:41.161 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:41.161 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:41.422 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:41.422 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:41.683 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:41.683 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:41.683 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:41.943 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:41.943 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:41.943 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:42.205 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:37:42.205 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:42.205 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:37:42.205 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:42.205 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:42.205 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:42.205 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:37:42.205 
18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:42.205 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:42.466 No valid GPT data, bailing 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:42.466 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:42.466 00:37:42.466 Discovery Log Number of Records 2, Generation counter 2 00:37:42.466 =====Discovery Log Entry 0====== 00:37:42.466 trtype: tcp 00:37:42.466 adrfam: ipv4 00:37:42.466 subtype: current discovery subsystem 00:37:42.466 treq: not specified, sq flow control disable supported 00:37:42.466 portid: 1 00:37:42.466 trsvcid: 4420 00:37:42.466 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:42.466 traddr: 10.0.0.1 00:37:42.466 eflags: none 00:37:42.466 sectype: none 00:37:42.466 =====Discovery Log Entry 1====== 00:37:42.466 trtype: tcp 00:37:42.466 adrfam: ipv4 00:37:42.466 subtype: nvme subsystem 00:37:42.466 treq: not specified, sq flow control disable supported 00:37:42.466 portid: 1 00:37:42.466 trsvcid: 4420 00:37:42.466 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:42.466 traddr: 
10.0.0.1 00:37:42.467 eflags: none 00:37:42.467 sectype: none 00:37:42.467 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:37:42.467 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:37:42.467 ===================================================== 00:37:42.467 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:37:42.467 ===================================================== 00:37:42.467 Controller Capabilities/Features 00:37:42.467 ================================ 00:37:42.467 Vendor ID: 0000 00:37:42.467 Subsystem Vendor ID: 0000 00:37:42.467 Serial Number: d1fe49a20bfda8a17d62 00:37:42.467 Model Number: Linux 00:37:42.467 Firmware Version: 6.8.9-20 00:37:42.467 Recommended Arb Burst: 0 00:37:42.467 IEEE OUI Identifier: 00 00 00 00:37:42.467 Multi-path I/O 00:37:42.467 May have multiple subsystem ports: No 00:37:42.467 May have multiple controllers: No 00:37:42.467 Associated with SR-IOV VF: No 00:37:42.467 Max Data Transfer Size: Unlimited 00:37:42.467 Max Number of Namespaces: 0 00:37:42.467 Max Number of I/O Queues: 1024 00:37:42.467 NVMe Specification Version (VS): 1.3 00:37:42.467 NVMe Specification Version (Identify): 1.3 00:37:42.467 Maximum Queue Entries: 1024 00:37:42.467 Contiguous Queues Required: No 00:37:42.467 Arbitration Mechanisms Supported 00:37:42.467 Weighted Round Robin: Not Supported 00:37:42.467 Vendor Specific: Not Supported 00:37:42.467 Reset Timeout: 7500 ms 00:37:42.467 Doorbell Stride: 4 bytes 00:37:42.467 NVM Subsystem Reset: Not Supported 00:37:42.467 Command Sets Supported 00:37:42.467 NVM Command Set: Supported 00:37:42.467 Boot Partition: Not Supported 00:37:42.467 Memory Page Size Minimum: 4096 bytes 00:37:42.467 Memory Page Size Maximum: 4096 bytes 00:37:42.467 Persistent Memory Region: Not Supported 00:37:42.467 Optional Asynchronous Events Supported 00:37:42.467 Namespace Attribute Notices: Not Supported 00:37:42.467 Firmware Activation Notices: Not Supported 00:37:42.467 ANA Change Notices: Not Supported 00:37:42.467 PLE Aggregate Log Change Notices: Not Supported 00:37:42.467 LBA Status Info Alert Notices: Not Supported 00:37:42.467 EGE Aggregate Log Change Notices: Not Supported 00:37:42.467 Normal NVM Subsystem Shutdown event: Not Supported 00:37:42.467 Zone Descriptor Change Notices: Not Supported 00:37:42.467 Discovery Log Change Notices: Supported 00:37:42.467 Controller Attributes 00:37:42.467 128-bit Host Identifier: Not Supported 00:37:42.467 Non-Operational Permissive Mode: Not Supported 00:37:42.467 NVM Sets: Not Supported 00:37:42.467 Read Recovery Levels: Not Supported 00:37:42.467 Endurance Groups: Not Supported 00:37:42.467 Predictable Latency Mode: Not Supported 00:37:42.467 Traffic Based Keep ALive: Not Supported 00:37:42.467 Namespace Granularity: Not Supported 00:37:42.467 SQ Associations: Not Supported 00:37:42.467 UUID List: Not Supported 00:37:42.467 Multi-Domain Subsystem: Not Supported 00:37:42.467 Fixed Capacity Management: Not Supported 00:37:42.467 Variable Capacity Management: Not Supported 00:37:42.467 Delete Endurance Group: Not Supported 00:37:42.467 Delete NVM Set: Not Supported 00:37:42.467 Extended LBA Formats Supported: Not Supported 00:37:42.467 Flexible Data Placement Supported: Not Supported 00:37:42.467 00:37:42.467 Controller Memory Buffer Support 00:37:42.467 ================================ 
00:37:42.467 Supported: No 00:37:42.467 00:37:42.467 Persistent Memory Region Support 00:37:42.467 ================================ 00:37:42.467 Supported: No 00:37:42.467 00:37:42.467 Admin Command Set Attributes 00:37:42.467 ============================ 00:37:42.467 Security Send/Receive: Not Supported 00:37:42.467 Format NVM: Not Supported 00:37:42.467 Firmware Activate/Download: Not Supported 00:37:42.467 Namespace Management: Not Supported 00:37:42.467 Device Self-Test: Not Supported 00:37:42.467 Directives: Not Supported 00:37:42.467 NVMe-MI: Not Supported 00:37:42.467 Virtualization Management: Not Supported 00:37:42.467 Doorbell Buffer Config: Not Supported 00:37:42.467 Get LBA Status Capability: Not Supported 00:37:42.467 Command & Feature Lockdown Capability: Not Supported 00:37:42.467 Abort Command Limit: 1 00:37:42.467 Async Event Request Limit: 1 00:37:42.467 Number of Firmware Slots: N/A 00:37:42.467 Firmware Slot 1 Read-Only: N/A 00:37:42.467 Firmware Activation Without Reset: N/A 00:37:42.467 Multiple Update Detection Support: N/A 00:37:42.467 Firmware Update Granularity: No Information Provided 00:37:42.467 Per-Namespace SMART Log: No 00:37:42.467 Asymmetric Namespace Access Log Page: Not Supported 00:37:42.467 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:37:42.467 Command Effects Log Page: Not Supported 00:37:42.467 Get Log Page Extended Data: Supported 00:37:42.467 Telemetry Log Pages: Not Supported 00:37:42.467 Persistent Event Log Pages: Not Supported 00:37:42.467 Supported Log Pages Log Page: May Support 00:37:42.467 Commands Supported & Effects Log Page: Not Supported 00:37:42.467 Feature Identifiers & Effects Log Page:May Support 00:37:42.467 NVMe-MI Commands & Effects Log Page: May Support 00:37:42.467 Data Area 4 for Telemetry Log: Not Supported 00:37:42.467 Error Log Page Entries Supported: 1 00:37:42.467 Keep Alive: Not Supported 00:37:42.467 00:37:42.467 NVM Command Set Attributes 00:37:42.467 ========================== 00:37:42.467 Submission Queue Entry Size 00:37:42.467 Max: 1 00:37:42.467 Min: 1 00:37:42.467 Completion Queue Entry Size 00:37:42.467 Max: 1 00:37:42.467 Min: 1 00:37:42.467 Number of Namespaces: 0 00:37:42.467 Compare Command: Not Supported 00:37:42.467 Write Uncorrectable Command: Not Supported 00:37:42.467 Dataset Management Command: Not Supported 00:37:42.467 Write Zeroes Command: Not Supported 00:37:42.467 Set Features Save Field: Not Supported 00:37:42.467 Reservations: Not Supported 00:37:42.467 Timestamp: Not Supported 00:37:42.467 Copy: Not Supported 00:37:42.467 Volatile Write Cache: Not Present 00:37:42.467 Atomic Write Unit (Normal): 1 00:37:42.467 Atomic Write Unit (PFail): 1 00:37:42.467 Atomic Compare & Write Unit: 1 00:37:42.467 Fused Compare & Write: Not Supported 00:37:42.467 Scatter-Gather List 00:37:42.467 SGL Command Set: Supported 00:37:42.467 SGL Keyed: Not Supported 00:37:42.467 SGL Bit Bucket Descriptor: Not Supported 00:37:42.467 SGL Metadata Pointer: Not Supported 00:37:42.467 Oversized SGL: Not Supported 00:37:42.467 SGL Metadata Address: Not Supported 00:37:42.467 SGL Offset: Supported 00:37:42.467 Transport SGL Data Block: Not Supported 00:37:42.467 Replay Protected Memory Block: Not Supported 00:37:42.467 00:37:42.468 Firmware Slot Information 00:37:42.468 ========================= 00:37:42.468 Active slot: 0 00:37:42.468 00:37:42.468 00:37:42.468 Error Log 00:37:42.468 ========= 00:37:42.468 00:37:42.468 Active Namespaces 00:37:42.468 ================= 00:37:42.468 Discovery Log Page 00:37:42.468 
================== 00:37:42.468 Generation Counter: 2 00:37:42.468 Number of Records: 2 00:37:42.468 Record Format: 0 00:37:42.468 00:37:42.468 Discovery Log Entry 0 00:37:42.468 ---------------------- 00:37:42.468 Transport Type: 3 (TCP) 00:37:42.468 Address Family: 1 (IPv4) 00:37:42.468 Subsystem Type: 3 (Current Discovery Subsystem) 00:37:42.468 Entry Flags: 00:37:42.468 Duplicate Returned Information: 0 00:37:42.468 Explicit Persistent Connection Support for Discovery: 0 00:37:42.468 Transport Requirements: 00:37:42.468 Secure Channel: Not Specified 00:37:42.468 Port ID: 1 (0x0001) 00:37:42.468 Controller ID: 65535 (0xffff) 00:37:42.468 Admin Max SQ Size: 32 00:37:42.468 Transport Service Identifier: 4420 00:37:42.468 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:37:42.468 Transport Address: 10.0.0.1 00:37:42.468 Discovery Log Entry 1 00:37:42.468 ---------------------- 00:37:42.468 Transport Type: 3 (TCP) 00:37:42.468 Address Family: 1 (IPv4) 00:37:42.468 Subsystem Type: 2 (NVM Subsystem) 00:37:42.468 Entry Flags: 00:37:42.468 Duplicate Returned Information: 0 00:37:42.468 Explicit Persistent Connection Support for Discovery: 0 00:37:42.468 Transport Requirements: 00:37:42.468 Secure Channel: Not Specified 00:37:42.468 Port ID: 1 (0x0001) 00:37:42.468 Controller ID: 65535 (0xffff) 00:37:42.468 Admin Max SQ Size: 32 00:37:42.468 Transport Service Identifier: 4420 00:37:42.468 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:37:42.468 Transport Address: 10.0.0.1 00:37:42.468 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:42.729 get_feature(0x01) failed 00:37:42.729 get_feature(0x02) failed 00:37:42.729 get_feature(0x04) failed 00:37:42.729 ===================================================== 00:37:42.729 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:42.729 ===================================================== 00:37:42.729 Controller Capabilities/Features 00:37:42.729 ================================ 00:37:42.729 Vendor ID: 0000 00:37:42.729 Subsystem Vendor ID: 0000 00:37:42.729 Serial Number: 25ed343075aa5a069457 00:37:42.729 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:37:42.729 Firmware Version: 6.8.9-20 00:37:42.729 Recommended Arb Burst: 6 00:37:42.729 IEEE OUI Identifier: 00 00 00 00:37:42.729 Multi-path I/O 00:37:42.729 May have multiple subsystem ports: Yes 00:37:42.729 May have multiple controllers: Yes 00:37:42.729 Associated with SR-IOV VF: No 00:37:42.729 Max Data Transfer Size: Unlimited 00:37:42.729 Max Number of Namespaces: 1024 00:37:42.729 Max Number of I/O Queues: 128 00:37:42.729 NVMe Specification Version (VS): 1.3 00:37:42.729 NVMe Specification Version (Identify): 1.3 00:37:42.729 Maximum Queue Entries: 1024 00:37:42.729 Contiguous Queues Required: No 00:37:42.729 Arbitration Mechanisms Supported 00:37:42.729 Weighted Round Robin: Not Supported 00:37:42.729 Vendor Specific: Not Supported 00:37:42.729 Reset Timeout: 7500 ms 00:37:42.729 Doorbell Stride: 4 bytes 00:37:42.729 NVM Subsystem Reset: Not Supported 00:37:42.729 Command Sets Supported 00:37:42.729 NVM Command Set: Supported 00:37:42.729 Boot Partition: Not Supported 00:37:42.729 Memory Page Size Minimum: 4096 bytes 00:37:42.729 Memory Page Size Maximum: 4096 bytes 00:37:42.729 Persistent Memory Region: Not 
Supported 00:37:42.730 Optional Asynchronous Events Supported 00:37:42.730 Namespace Attribute Notices: Supported 00:37:42.730 Firmware Activation Notices: Not Supported 00:37:42.730 ANA Change Notices: Supported 00:37:42.730 PLE Aggregate Log Change Notices: Not Supported 00:37:42.730 LBA Status Info Alert Notices: Not Supported 00:37:42.730 EGE Aggregate Log Change Notices: Not Supported 00:37:42.730 Normal NVM Subsystem Shutdown event: Not Supported 00:37:42.730 Zone Descriptor Change Notices: Not Supported 00:37:42.730 Discovery Log Change Notices: Not Supported 00:37:42.730 Controller Attributes 00:37:42.730 128-bit Host Identifier: Supported 00:37:42.730 Non-Operational Permissive Mode: Not Supported 00:37:42.730 NVM Sets: Not Supported 00:37:42.730 Read Recovery Levels: Not Supported 00:37:42.730 Endurance Groups: Not Supported 00:37:42.730 Predictable Latency Mode: Not Supported 00:37:42.730 Traffic Based Keep ALive: Supported 00:37:42.730 Namespace Granularity: Not Supported 00:37:42.730 SQ Associations: Not Supported 00:37:42.730 UUID List: Not Supported 00:37:42.730 Multi-Domain Subsystem: Not Supported 00:37:42.730 Fixed Capacity Management: Not Supported 00:37:42.730 Variable Capacity Management: Not Supported 00:37:42.730 Delete Endurance Group: Not Supported 00:37:42.730 Delete NVM Set: Not Supported 00:37:42.730 Extended LBA Formats Supported: Not Supported 00:37:42.730 Flexible Data Placement Supported: Not Supported 00:37:42.730 00:37:42.730 Controller Memory Buffer Support 00:37:42.730 ================================ 00:37:42.730 Supported: No 00:37:42.730 00:37:42.730 Persistent Memory Region Support 00:37:42.730 ================================ 00:37:42.730 Supported: No 00:37:42.730 00:37:42.730 Admin Command Set Attributes 00:37:42.730 ============================ 00:37:42.730 Security Send/Receive: Not Supported 00:37:42.730 Format NVM: Not Supported 00:37:42.730 Firmware Activate/Download: Not Supported 00:37:42.730 Namespace Management: Not Supported 00:37:42.730 Device Self-Test: Not Supported 00:37:42.730 Directives: Not Supported 00:37:42.730 NVMe-MI: Not Supported 00:37:42.730 Virtualization Management: Not Supported 00:37:42.730 Doorbell Buffer Config: Not Supported 00:37:42.730 Get LBA Status Capability: Not Supported 00:37:42.730 Command & Feature Lockdown Capability: Not Supported 00:37:42.730 Abort Command Limit: 4 00:37:42.730 Async Event Request Limit: 4 00:37:42.730 Number of Firmware Slots: N/A 00:37:42.730 Firmware Slot 1 Read-Only: N/A 00:37:42.730 Firmware Activation Without Reset: N/A 00:37:42.730 Multiple Update Detection Support: N/A 00:37:42.730 Firmware Update Granularity: No Information Provided 00:37:42.730 Per-Namespace SMART Log: Yes 00:37:42.730 Asymmetric Namespace Access Log Page: Supported 00:37:42.730 ANA Transition Time : 10 sec 00:37:42.730 00:37:42.730 Asymmetric Namespace Access Capabilities 00:37:42.730 ANA Optimized State : Supported 00:37:42.730 ANA Non-Optimized State : Supported 00:37:42.730 ANA Inaccessible State : Supported 00:37:42.730 ANA Persistent Loss State : Supported 00:37:42.730 ANA Change State : Supported 00:37:42.730 ANAGRPID is not changed : No 00:37:42.730 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:37:42.730 00:37:42.730 ANA Group Identifier Maximum : 128 00:37:42.730 Number of ANA Group Identifiers : 128 00:37:42.730 Max Number of Allowed Namespaces : 1024 00:37:42.730 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:37:42.730 Command Effects Log Page: Supported 00:37:42.730 Get Log Page Extended Data: 
Supported 00:37:42.730 Telemetry Log Pages: Not Supported 00:37:42.730 Persistent Event Log Pages: Not Supported 00:37:42.730 Supported Log Pages Log Page: May Support 00:37:42.730 Commands Supported & Effects Log Page: Not Supported 00:37:42.730 Feature Identifiers & Effects Log Page:May Support 00:37:42.730 NVMe-MI Commands & Effects Log Page: May Support 00:37:42.730 Data Area 4 for Telemetry Log: Not Supported 00:37:42.730 Error Log Page Entries Supported: 128 00:37:42.730 Keep Alive: Supported 00:37:42.730 Keep Alive Granularity: 1000 ms 00:37:42.730 00:37:42.730 NVM Command Set Attributes 00:37:42.730 ========================== 00:37:42.730 Submission Queue Entry Size 00:37:42.730 Max: 64 00:37:42.730 Min: 64 00:37:42.730 Completion Queue Entry Size 00:37:42.730 Max: 16 00:37:42.730 Min: 16 00:37:42.730 Number of Namespaces: 1024 00:37:42.730 Compare Command: Not Supported 00:37:42.730 Write Uncorrectable Command: Not Supported 00:37:42.730 Dataset Management Command: Supported 00:37:42.730 Write Zeroes Command: Supported 00:37:42.730 Set Features Save Field: Not Supported 00:37:42.730 Reservations: Not Supported 00:37:42.730 Timestamp: Not Supported 00:37:42.730 Copy: Not Supported 00:37:42.730 Volatile Write Cache: Present 00:37:42.730 Atomic Write Unit (Normal): 1 00:37:42.730 Atomic Write Unit (PFail): 1 00:37:42.730 Atomic Compare & Write Unit: 1 00:37:42.730 Fused Compare & Write: Not Supported 00:37:42.730 Scatter-Gather List 00:37:42.730 SGL Command Set: Supported 00:37:42.730 SGL Keyed: Not Supported 00:37:42.730 SGL Bit Bucket Descriptor: Not Supported 00:37:42.730 SGL Metadata Pointer: Not Supported 00:37:42.730 Oversized SGL: Not Supported 00:37:42.730 SGL Metadata Address: Not Supported 00:37:42.730 SGL Offset: Supported 00:37:42.730 Transport SGL Data Block: Not Supported 00:37:42.730 Replay Protected Memory Block: Not Supported 00:37:42.730 00:37:42.730 Firmware Slot Information 00:37:42.730 ========================= 00:37:42.730 Active slot: 0 00:37:42.730 00:37:42.730 Asymmetric Namespace Access 00:37:42.730 =========================== 00:37:42.730 Change Count : 0 00:37:42.730 Number of ANA Group Descriptors : 1 00:37:42.730 ANA Group Descriptor : 0 00:37:42.730 ANA Group ID : 1 00:37:42.730 Number of NSID Values : 1 00:37:42.730 Change Count : 0 00:37:42.730 ANA State : 1 00:37:42.730 Namespace Identifier : 1 00:37:42.730 00:37:42.730 Commands Supported and Effects 00:37:42.730 ============================== 00:37:42.730 Admin Commands 00:37:42.730 -------------- 00:37:42.730 Get Log Page (02h): Supported 00:37:42.730 Identify (06h): Supported 00:37:42.730 Abort (08h): Supported 00:37:42.730 Set Features (09h): Supported 00:37:42.730 Get Features (0Ah): Supported 00:37:42.730 Asynchronous Event Request (0Ch): Supported 00:37:42.730 Keep Alive (18h): Supported 00:37:42.730 I/O Commands 00:37:42.730 ------------ 00:37:42.730 Flush (00h): Supported 00:37:42.730 Write (01h): Supported LBA-Change 00:37:42.730 Read (02h): Supported 00:37:42.730 Write Zeroes (08h): Supported LBA-Change 00:37:42.730 Dataset Management (09h): Supported 00:37:42.730 00:37:42.730 Error Log 00:37:42.730 ========= 00:37:42.730 Entry: 0 00:37:42.730 Error Count: 0x3 00:37:42.730 Submission Queue Id: 0x0 00:37:42.730 Command Id: 0x5 00:37:42.730 Phase Bit: 0 00:37:42.730 Status Code: 0x2 00:37:42.731 Status Code Type: 0x0 00:37:42.731 Do Not Retry: 1 00:37:42.731 Error Location: 0x28 00:37:42.731 LBA: 0x0 00:37:42.731 Namespace: 0x0 00:37:42.731 Vendor Log Page: 0x0 00:37:42.731 ----------- 
00:37:42.731 Entry: 1 00:37:42.731 Error Count: 0x2 00:37:42.731 Submission Queue Id: 0x0 00:37:42.731 Command Id: 0x5 00:37:42.731 Phase Bit: 0 00:37:42.731 Status Code: 0x2 00:37:42.731 Status Code Type: 0x0 00:37:42.731 Do Not Retry: 1 00:37:42.731 Error Location: 0x28 00:37:42.731 LBA: 0x0 00:37:42.731 Namespace: 0x0 00:37:42.731 Vendor Log Page: 0x0 00:37:42.731 ----------- 00:37:42.731 Entry: 2 00:37:42.731 Error Count: 0x1 00:37:42.731 Submission Queue Id: 0x0 00:37:42.731 Command Id: 0x4 00:37:42.731 Phase Bit: 0 00:37:42.731 Status Code: 0x2 00:37:42.731 Status Code Type: 0x0 00:37:42.731 Do Not Retry: 1 00:37:42.731 Error Location: 0x28 00:37:42.731 LBA: 0x0 00:37:42.731 Namespace: 0x0 00:37:42.731 Vendor Log Page: 0x0 00:37:42.731 00:37:42.731 Number of Queues 00:37:42.731 ================ 00:37:42.731 Number of I/O Submission Queues: 128 00:37:42.731 Number of I/O Completion Queues: 128 00:37:42.731 00:37:42.731 ZNS Specific Controller Data 00:37:42.731 ============================ 00:37:42.731 Zone Append Size Limit: 0 00:37:42.731 00:37:42.731 00:37:42.731 Active Namespaces 00:37:42.731 ================= 00:37:42.731 get_feature(0x05) failed 00:37:42.731 Namespace ID:1 00:37:42.731 Command Set Identifier: NVM (00h) 00:37:42.731 Deallocate: Supported 00:37:42.731 Deallocated/Unwritten Error: Not Supported 00:37:42.731 Deallocated Read Value: Unknown 00:37:42.731 Deallocate in Write Zeroes: Not Supported 00:37:42.731 Deallocated Guard Field: 0xFFFF 00:37:42.731 Flush: Supported 00:37:42.731 Reservation: Not Supported 00:37:42.731 Namespace Sharing Capabilities: Multiple Controllers 00:37:42.731 Size (in LBAs): 3750748848 (1788GiB) 00:37:42.731 Capacity (in LBAs): 3750748848 (1788GiB) 00:37:42.731 Utilization (in LBAs): 3750748848 (1788GiB) 00:37:42.731 UUID: c38894be-e91c-4ff9-91b3-b8608e737d3f 00:37:42.731 Thin Provisioning: Not Supported 00:37:42.731 Per-NS Atomic Units: Yes 00:37:42.731 Atomic Write Unit (Normal): 8 00:37:42.731 Atomic Write Unit (PFail): 8 00:37:42.731 Preferred Write Granularity: 8 00:37:42.731 Atomic Compare & Write Unit: 8 00:37:42.731 Atomic Boundary Size (Normal): 0 00:37:42.731 Atomic Boundary Size (PFail): 0 00:37:42.731 Atomic Boundary Offset: 0 00:37:42.731 NGUID/EUI64 Never Reused: No 00:37:42.731 ANA group ID: 1 00:37:42.731 Namespace Write Protected: No 00:37:42.731 Number of LBA Formats: 1 00:37:42.731 Current LBA Format: LBA Format #00 00:37:42.731 LBA Format #00: Data Size: 512 Metadata Size: 0 00:37:42.731 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:42.731 rmmod nvme_tcp 00:37:42.731 rmmod nvme_fabrics 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 
-- # set -e 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:42.731 18:04:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:37:45.276 18:04:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:48.578 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:80:01.2 (8086 0b00): ioatdma -> 
vfio-pci 00:37:48.578 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:48.578 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:48.839 00:37:48.839 real 0m19.662s 00:37:48.839 user 0m5.320s 00:37:48.839 sys 0m11.341s 00:37:48.839 18:04:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:48.839 18:04:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:48.839 ************************************ 00:37:48.839 END TEST nvmf_identify_kernel_target 00:37:48.839 ************************************ 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.101 ************************************ 00:37:49.101 START TEST nvmf_auth_host 00:37:49.101 ************************************ 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:37:49.101 * Looking for test storage... 
00:37:49.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:49.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.101 --rc genhtml_branch_coverage=1 00:37:49.101 --rc genhtml_function_coverage=1 00:37:49.101 --rc genhtml_legend=1 00:37:49.101 --rc geninfo_all_blocks=1 00:37:49.101 --rc geninfo_unexecuted_blocks=1 00:37:49.101 00:37:49.101 ' 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:49.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.101 --rc genhtml_branch_coverage=1 00:37:49.101 --rc genhtml_function_coverage=1 00:37:49.101 --rc genhtml_legend=1 00:37:49.101 --rc geninfo_all_blocks=1 00:37:49.101 --rc geninfo_unexecuted_blocks=1 00:37:49.101 00:37:49.101 ' 00:37:49.101 18:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:49.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.101 --rc genhtml_branch_coverage=1 00:37:49.101 --rc genhtml_function_coverage=1 00:37:49.101 --rc genhtml_legend=1 00:37:49.101 --rc geninfo_all_blocks=1 00:37:49.101 --rc geninfo_unexecuted_blocks=1 00:37:49.101 00:37:49.101 ' 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:49.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.101 --rc genhtml_branch_coverage=1 00:37:49.101 --rc genhtml_function_coverage=1 00:37:49.101 --rc genhtml_legend=1 00:37:49.101 --rc geninfo_all_blocks=1 00:37:49.101 --rc geninfo_unexecuted_blocks=1 00:37:49.101 00:37:49.101 ' 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.101 18:04:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.101 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:49.362 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:49.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:37:49.363 18:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:37:57.496 18:04:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:57.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:57.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:57.496 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:57.497 18:04:56 
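gather_supported_nvmf_pci_devs matches every PCI function against a fixed list of Intel E810/X722 and Mellanox device IDs via pci_bus_cache["vendor:device"] lookups. The cache itself is built outside this excerpt, so the sysfs-based reconstruction below is inferred from the lookups rather than copied from common.sh:

  declare -A pci_bus_cache
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor")                        # e.g. 0x8086
      device=$(<"$dev/device")                        # e.g. 0x159b
      pci_bus_cache["$vendor:$device"]+="${dev##*/} "  # bus addresses per ID pair
  done
  echo "E810 (0x8086:0x159b): ${pci_bus_cache[0x8086:0x159b]:-none}"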
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:57.497 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:57.497 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:57.497 18:04:56 
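The two "Found net devices under 0000:4b:00.x" entries come from globbing the PCI function's net/ directory and stripping the path, exactly as the trace's "${pci_net_devs[@]##*/}" expansion shows. Isolated, using one bus address from this run:

  pci=0000:4b:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one directory per interface
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
  for net_dev in "${pci_net_devs[@]}"; do
      echo "Found net device under $pci: $net_dev ($(<"/sys/class/net/$net_dev/operstate"))"
  done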
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:57.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:57.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:37:57.497 00:37:57.497 --- 10.0.0.2 ping statistics --- 00:37:57.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.497 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:57.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:57.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:37:57.497 00:37:57.497 --- 10.0.0.1 ping statistics --- 00:37:57.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:57.497 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=2909072 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 2909072 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2909072 ']' 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
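nvmf_tcp_init, traced just above, splits the two E810 ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and a private one (target, 10.0.0.2 on cvl_0_0), opens TCP/4420 in the firewall, and ping-checks both directions. A condensed replay of the same sequence; the veth pair on the first line is a stand-in assumption for machines without the dual-port NIC:

  ip link add cvl_0_0 type veth peer name cvl_0_1     # stand-in for the E810 port pair
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side goes private
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns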
00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:57.497 18:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=6a6d3731e8671b9c1fd1c06c579b6bdd 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.fj3 00:37:57.757 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 6a6d3731e8671b9c1fd1c06c579b6bdd 0 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 6a6d3731e8671b9c1fd1c06c579b6bdd 0 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=6a6d3731e8671b9c1fd1c06c579b6bdd 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.fj3 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.fj3 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fj3 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:57.758 18:04:57 
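nvmfappstart launched nvmf_tgt inside the target namespace (nvmfpid=2909072 above) and waitforlisten blocks until the RPC socket answers. The harness's polling logic is elided by xtrace, so this loop is only a rough equivalent of the wait:

  nvmfpid=2909072                             # from the nvmfpid= entry above
  sock=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      [[ -S $sock ]] && break                 # socket appears once the app listens
      sleep 0.1
  done

UNIX-domain sockets live on the filesystem rather than in a network namespace, which is why the RPC checks work from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk.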
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=a030093e52ccfedcc24aee5ed9f4c3bbc2601c3e78215a477089705d4cea6cb6 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.qQD 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key a030093e52ccfedcc24aee5ed9f4c3bbc2601c3e78215a477089705d4cea6cb6 3 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 a030093e52ccfedcc24aee5ed9f4c3bbc2601c3e78215a477089705d4cea6cb6 3 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=a030093e52ccfedcc24aee5ed9f4c3bbc2601c3e78215a477089705d4cea6cb6 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:37:57.758 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.qQD 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.qQD 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.qQD 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=11b22d9400597b04303b88fef84ba3194691519b5bf7aea1 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.yid 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 11b22d9400597b04303b88fef84ba3194691519b5bf7aea1 0 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 11b22d9400597b04303b88fef84ba3194691519b5bf7aea1 0 
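gen_dhchap_key, traced through here for each of keys[0..4] and their ckey counterparts, draws len/2 random bytes with xxd and hands the hex string to format_key, which emits the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hash id>:<base64(secret || crc32)>: (the null=0 ... sha512=3 id table is the digests map visible in the trace). The python body is elided by xtrace, so this sketch reconstructs what the format requires rather than quoting common.sh:

  gen_dhchap_key() {   # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex length>
      local digest=$1 len=$2 key
      local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # same map as the trace
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)             # len hex characters
      # Append a little-endian CRC-32 of the secret, then base64 the whole thing.
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${ids[$digest]}"
  }
  gen_dhchap_key sha256 32    # -> DHHC-1:01:<base64>: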
00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:58.018 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=11b22d9400597b04303b88fef84ba3194691519b5bf7aea1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.yid 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.yid 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.yid 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7edb1c64db8e9eb6ec6ee8c64e07cbe1d880f1d2f35483ff 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Bgx 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7edb1c64db8e9eb6ec6ee8c64e07cbe1d880f1d2f35483ff 2 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7edb1c64db8e9eb6ec6ee8c64e07cbe1d880f1d2f35483ff 2 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7edb1c64db8e9eb6ec6ee8c64e07cbe1d880f1d2f35483ff 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Bgx 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Bgx 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Bgx 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:58.019 18:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=2058913e63d5685eb550b20b96842f9a 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Unz 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 2058913e63d5685eb550b20b96842f9a 1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 2058913e63d5685eb550b20b96842f9a 1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=2058913e63d5685eb550b20b96842f9a 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Unz 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Unz 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Unz 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=905e7d20e158cbc997abba2b285b342d 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Yms 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 905e7d20e158cbc997abba2b285b342d 1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 905e7d20e158cbc997abba2b285b342d 1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=905e7d20e158cbc997abba2b285b342d 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:37:58.019 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Yms 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Yms 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Yms 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=10ba8491641897664794018985510d385c474760fb032117 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.yw9 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 10ba8491641897664794018985510d385c474760fb032117 2 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 10ba8491641897664794018985510d385c474760fb032117 2 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=10ba8491641897664794018985510d385c474760fb032117 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:37:58.279 18:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.yw9 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.yw9 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yw9 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:37:58.279 18:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=106214d252805d64c853baa051c20dbc 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:37:58.279 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.hCe 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 106214d252805d64c853baa051c20dbc 0 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 106214d252805d64c853baa051c20dbc 0 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=106214d252805d64c853baa051c20dbc 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.hCe 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.hCe 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hCe 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=55c431bbff13f99e53f6ea858dec9308dba178f2f5dcea72eb0fcfd5a4398309 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.uCV 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 55c431bbff13f99e53f6ea858dec9308dba178f2f5dcea72eb0fcfd5a4398309 3 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 55c431bbff13f99e53f6ea858dec9308dba178f2f5dcea72eb0fcfd5a4398309 3 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=55c431bbff13f99e53f6ea858dec9308dba178f2f5dcea72eb0fcfd5a4398309 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.uCV 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.uCV 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.uCV 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2909072 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2909072 ']' 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:58.280 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fj3 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.qQD ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qQD 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.yid 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Bgx ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Bgx 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Unz 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Yms ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yms 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.yw9 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hCe ]] 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hCe 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.540 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.800 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.800 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:58.800 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.uCV 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:37:58.801 18:04:58 
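Each temp file generated above is registered with the running target under a keyring name (key0..key4, ckey0..ckey3). rpc_cmd is the harness wrapper around scripts/rpc.py, so the standalone equivalents, using this run's paths and the default /var/tmp/spdk.sock socket, would be:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.fj3
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qQD
  ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.yid
  ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bgx
  # ...continues through key4; ckey4 is intentionally left empty in this run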
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:58.801 18:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:02.094 Waiting for block devices as requested 00:38:02.094 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:02.354 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:02.354 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:02.354 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:02.354 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:02.615 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:02.615 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:02.615 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:02.876 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:02.876 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:03.135 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:03.135 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:03.135 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:03.135 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:03.395 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:03.395 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:03.395 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:04.334 No valid GPT data, bailing 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:04.334 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:04.614 18:05:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:04.614 00:38:04.614 Discovery Log Number of Records 2, Generation counter 2 00:38:04.614 =====Discovery Log Entry 0====== 00:38:04.614 trtype: tcp 00:38:04.614 adrfam: ipv4 00:38:04.614 subtype: current discovery subsystem 00:38:04.614 treq: not specified, sq flow control disable supported 00:38:04.614 portid: 1 00:38:04.614 trsvcid: 4420 00:38:04.614 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:04.614 traddr: 10.0.0.1 00:38:04.614 eflags: none 00:38:04.614 sectype: none 00:38:04.614 =====Discovery Log Entry 1====== 00:38:04.614 trtype: tcp 00:38:04.614 adrfam: ipv4 00:38:04.614 subtype: nvme subsystem 00:38:04.614 treq: not specified, sq flow control disable supported 00:38:04.614 portid: 1 00:38:04.614 trsvcid: 4420 00:38:04.614 subnqn: nqn.2024-02.io.spdk:cnode0 00:38:04.614 traddr: 10.0.0.1 00:38:04.614 eflags: none 00:38:04.614 sectype: none 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
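The mkdir/echo/ln -s burst above is the stock configfs recipe for a kernel NVMe/TCP target backed by the local SSD. xtrace hides the redirection targets, so the attribute file names below are filled in from the kernel's nvmet configfs layout (worth verifying against your kernel) rather than read out of common.sh:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet                                   # nvmet-tcp loads on demand when the port is enabled (kernel-dependent)
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"     # expose the subsystem on the port
  mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
  echo 0 > "$subsys/attr_allow_any_host"           # auth test: only allow-listed hosts
  ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"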
-- host/auth.sh@49 -- # echo ffdhe2048 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.614 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.891 nvme0n1 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
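connect_authenticate, which the loop will now repeat for every digest x dhgroup x keyid combination, first pins the initiator's accepted DH-HMAC-CHAP parameters and then attaches with the registered keyring entries. The standalone equivalents of the RPCs just traced:

  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers    # expect "nvme0" once auth succeeds
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0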
00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.891 nvme0n1 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.891 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.179 18:05:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:05.179 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.180 nvme0n1 00:38:05.180 18:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.180 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.440 nvme0n1 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.440 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.701 nvme0n1 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.701 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.962 nvme0n1 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.962 18:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.962 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.223 nvme0n1 00:38:06.223 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.223 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:06.223 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:06.223 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.223 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.223 18:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.223 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:06.223 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:06.223 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:06.224 
18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.224 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.484 nvme0n1 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.484 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:06.485 18:05:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.485 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.745 nvme0n1 00:38:06.745 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.745 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:06.745 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.745 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:06.745 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.745 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.745 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:06.746 18:05:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.746 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.007 nvme0n1 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:07.007 18:05:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.007 18:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.268 nvme0n1 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.268 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.529 nvme0n1 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:07.529 18:05:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.529 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.789 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.789 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:07.789 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.790 nvme0n1 00:38:07.790 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
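The trace above repeats a single pattern: for each DH group and each key index, a target-side key install (nvmet_auth_set_key) is followed by a full host-side connect/authenticate cycle (connect_authenticate). A minimal sketch of the driving loop, reconstructed from the host/auth.sh@101-@103 xtrace markers; the keys/ckeys arrays are assumed to hold the DHHC-1 secrets seen in the log, and the dhgroups list is an assumption (only ffdhe4096, ffdhe6144 and ffdhe8192 appear in this part of the trace):

  # Reconstructed sketch of the loop driving this trace (host/auth.sh @101-@103).
  digest=sha256
  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key install
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side connect + verify
      done
  done

Key index 0 uses a unidirectional host key plus a controller challenge key (ckey0), while keyid=4 has an empty ckey, which is why the [[ -z '' ]] branch skips the --dhchap-ctrlr-key argument for that iteration.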
00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.050 18:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.310 nvme0n1 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.310 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.570 nvme0n1 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:08.570 18:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.570 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.829 nvme0n1 00:38:08.829 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.829 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.829 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.829 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.829 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.829 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.088 18:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.348 nvme0n1 00:38:09.348 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.348 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.348 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.348 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.348 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.348 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.607 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 
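Every connect_authenticate invocation in the trace expands to the same five RPCs (host/auth.sh @55-@65): configure the host's allowed digests and DH groups, resolve the target IP, attach with the DH-HMAC-CHAP key pair, confirm the controller came up, and detach. A sketch assembled directly from those xtrace lines; rpc_cmd is the test framework's wrapper around the SPDK JSON-RPC client:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3 ip
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      ip=$(get_main_ns_ip)
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The "nvme0n1" strings interleaved through the log are the namespace of the freshly attached controller showing up between RPC calls, not part of the script output.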
00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.608 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.867 nvme0n1 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.867 18:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.867 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.128 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.129 18:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.388 nvme0n1 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.388 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.389 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:10.648 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.648 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:10.648 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:10.648 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:10.648 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:10.648 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.648 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.908 nvme0n1 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.908 18:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.908 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:10.909 18:05:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.909 18:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.479 nvme0n1 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:11.479 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.480 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:12.050 nvme0n1 00:38:12.050 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.050 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:12.050 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:12.050 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.050 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.050 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.310 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:12.310 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:12.310 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.310 18:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.310 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.881 nvme0n1 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:38:12.881 
18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.881 18:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.823 nvme0n1 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:13.823 
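(The nvmet_auth_set_key calls traced above only show four echo statements at auth.sh@48-51; their destinations are not in the log. A plausible reconstruction, assuming the standard Linux nvmet configfs layout for in-band authentication:)

  # Write digest, DH group, host key and controller key for one host NQN.
  # The configfs paths are assumed from the kernel nvmet auth interface,
  # not shown in the trace.
  host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"
  echo ffdhe8192      > "$host_cfg/dhchap_dhgroup"
  echo "$key"         > "$host_cfg/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"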
18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.823 18:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.394 nvme0n1 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.394 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.964 nvme0n1 00:38:14.964 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.964 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:14.964 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
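(The secrets exchanged above use the NVMe in-band authentication representation DHHC-1:<nn>:<base64>:, where the two-digit field selects the transform applied to the secret — 00 for a plain secret, 01/02/03 for SHA-256/384/512-sized ones, as in keyid 4's DHHC-1:03 key — and the base64 payload carries a trailing CRC. Note keyid 4 has no controller key, so the attach above passes --dhchap-key key4 alone. Secrets in this form can be minted with nvme-cli; the exact flags below are assumed from nvme-cli 2.x, not from this log:)

  # Generate a SHA-256-class DH-HMAC-CHAP secret bound to a host NQN (flags assumed).
  nvme gen-dhchap-key --hmac=1 --nqn=nqn.2024-02.io.spdk:host0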
DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.965 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.225 18:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.225 nvme0n1 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.225 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.226 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.487 nvme0n1 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:15.487 18:05:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.487 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.747 nvme0n1 00:38:15.748 18:05:15 
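(The stray nvme0n1 tokens between rounds appear to be the bdev name the attach RPC prints as the namespace comes up; auth.sh@64-65 then verify the controller and tear it down. The equivalent check, with ./scripts/rpc.py assumed as above:)

  # Authentication passed iff the controller actually materialized.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || exit 1
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0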
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.748 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.008 nvme0n1 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.008 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
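(The sweep driving all of this is the triple loop at auth.sh@100-104: digests × DH groups × key IDs. Its shape is sketched below; array contents are only partially inferable from this trace, and entries marked assumed do not appear above:)

  for digest in "${digests[@]}"; do        # sha256, sha384 seen here; others assumed
    for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048/3072/8192 seen; others assumed
      for keyid in "${!keys[@]}"; do       # 0..4, indices of the keys[] array
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target side (auth.sh@103)
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side  (auth.sh@104)
      done
    done
  done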
host/auth.sh@44 -- # digest=sha384 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.009 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.269 nvme0n1 00:38:16.269 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.269 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:16.269 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:16.269 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.269 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.269 18:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:16.269 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.270 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.530 nvme0n1 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.530 
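(get_main_ns_ip, traced repeatedly at nvmf/common.sh@765-779, picks the address the host dials: it maps the transport to the name of the environment variable holding the right IP and prints that variable's value, 10.0.0.1 here. A reconstruction follows — the TEST_TRANSPORT variable name and the indirect expansion are assumed, since the trace only shows already-expanded values:)

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1   # traced as [[ -z tcp ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # traced as [[ -z 10.0.0.1 ]]
      echo "${!ip}"                          # -> 10.0.0.1
  }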
18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:16.530 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:16.531 18:05:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.531 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.792 nvme0n1 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.792 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.052 nvme0n1 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.053 18:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.314 nvme0n1 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:17.314 
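# --- annotation: what the nvmet_auth_set_key steps above do on the target side ---
# A minimal sketch, assuming the test drives the Linux kernel nvmet target and that
# the echoed 'hmac(sha384)', dhgroup and DHHC-1 strings are redirected into the
# standard nvmet configfs host attributes; the paths and the placeholder secrets
# below are assumptions, not something shown verbatim in this trace.
hostnqn=nqn.2024-02.io.spdk:host0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn          # requires root and a configured host entry
echo 'hmac(sha384)' > "$host_cfs/dhchap_hash"             # digest for the CHAP HMAC
echo 'ffdhe3072' > "$host_cfs/dhchap_dhgroup"             # FFDHE group for the DH step
echo 'DHHC-1:01:<base64-secret>:' > "$host_cfs/dhchap_key"       # host secret (placeholder)
echo 'DHHC-1:01:<base64-secret>:' > "$host_cfs/dhchap_ctrl_key"  # controller secret, written only when a ckey exists
# --- end annotation ---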
18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.314 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.575 nvme0n1 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.575 
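# --- annotation: anatomy of the DHHC-1 secrets echoed above ---
# Per the NVMe DH-HMAC-CHAP secret convention (as used by nvme-cli and SPDK; stated
# here as background, not read from this trace), a secret is "DHHC-1:<hh>:<base64>:"
# where <hh> is 00 for an opaque secret or 01/02/03 for a SHA-256/384/512-sized one
# (32/48/64 bytes), and the base64 payload carries the secret followed by a 4-byte
# CRC-32 integrity trailer. A quick length check on the keyid-4 secret from the log:
key='DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=:'
b64=${key#DHHC-1:??:}                # strip the "DHHC-1:03:" prefix
b64=${b64%:}                         # and the trailing colon
echo -n "$b64" | base64 -d | wc -c   # prints 68: a 64-byte secret plus the 4-byte CRC-32 trailer
# --- end annotation ---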
18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.575 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.835 nvme0n1 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:17.835 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:17.836 18:05:17 
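# --- annotation: the get_main_ns_ip logic traced above ---
# Reconstructed shape of the helper from the nvmf/common.sh@765-779 entries: it maps
# the transport name to the *name* of an environment variable, then uses bash
# indirect expansion to fetch its value. The error returns are an assumption; the
# trace only shows the success path ending in "echo 10.0.0.1".
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"   # indirect expansion; in this run $NVMF_INITIATOR_IP is 10.0.0.1
}
# --- end annotation ---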
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.836 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.095 nvme0n1 00:38:18.095 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.095 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:18.095 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.095 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:18.095 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.095 18:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:18.355 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.356 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.618 nvme0n1 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.618 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.880 nvme0n1 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:18.880 18:05:18 
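# --- annotation: one connect_authenticate pass, written out as plain RPC calls ---
# Every flag below is lifted from the trace; the only assumptions are that rpc_cmd
# wraps SPDK's scripts/rpc.py and that key3/ckey3 name keyring entries registered
# earlier in the run (the registration itself is not part of this excerpt).
rpc=scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3    # prints the new bdev, nvme0n1 in this log
$rpc bdev_nvme_get_controllers | jq -r '.[].name' # must report nvme0
$rpc bdev_nvme_detach_controller nvme0            # tear down before the next keyid
# --- end annotation ---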
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.880 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.140 nvme0n1 00:38:19.140 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.140 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.140 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.140 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.140 18:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.140 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.140 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.140 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.140 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.140 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:19.403 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.404 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.673 nvme0n1 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.673 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.245 nvme0n1 00:38:20.245 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.245 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:20.245 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:20.245 18:05:19 
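# --- annotation: why keyid 4 connects without --dhchap-ctrlr-key ---
# The host/auth.sh@58 entries build the optional flag with bash's :+ expansion;
# ckeys[4] is empty in this run (the "@51 [[ -z '' ]]" checks above), so the array
# expands to nothing and authentication for keyid 4 is unidirectional. The first
# line is verbatim from the trace; the surrounding variables ($ip, $hostnqn,
# $subnqn, keyid) stand in for values supplied by the harness.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"
# --- end annotation ---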
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.245 18:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.245 18:05:20 
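# --- annotation: the loop driving these iterations ---
# Reconstructed from the host/auth.sh@101-@104 markers: for the sha384 digest this
# section walks every (dhgroup, keyid) pair, configuring the target and then
# connecting from the initiator. ffdhe3072, ffdhe4096, ffdhe6144 and ffdhe8192 are
# the groups visible in this stretch of the log; the array names are from the trace.
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                        # keyids 0..4
        nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"   # target-side configfs setup
        connect_authenticate "sha384" "$dhgroup" "$keyid" # initiator attach/verify/detach
    done
done
# --- end annotation ---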
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:20.245 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:20.246 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:20.246 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:20.246 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:20.246 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.246 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.817 nvme0n1 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:20.817 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:20.818 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:20.818 
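# --- annotation: the xtrace_disable / "[[ 0 == 0 ]]" bracketing around each RPC ---
# Every rpc_cmd above is wrapped so the RPC plumbing is untraced and only the result
# check appears in the log. This is a simplified stand-in for the wrapper in
# autotest_common.sh, sketched from the @561/@589 entries, not its actual
# implementation (xtrace_restore in particular is assumed):
rpc_cmd() {
    xtrace_disable                  # the "@561 xtrace_disable" entries
    "$rootdir/scripts/rpc.py" "$@"
    local rc=$?
    xtrace_restore
    [[ $rc == 0 ]]                  # traced as "@589 [[ 0 == 0 ]]" on success
}
# --- end annotation ---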
18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.078 nvme0n1 00:38:21.078 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.078 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:21.078 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:21.078 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.078 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.338 18:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.338 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.597 nvme0n1 00:38:21.597 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.597 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:21.597 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:21.597 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.597 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.597 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:38:21.857 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:21.857 18:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.858 18:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.427 nvme0n1 00:38:22.427 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:22.427 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:22.428 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.997 nvme0n1 00:38:22.997 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:22.997 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:22.997 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:22.997 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:22.997 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.997 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.257 
18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:23.257 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:23.258 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.258 18:05:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.827 nvme0n1 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.827 18:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.767 nvme0n1 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.767 18:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:24.767 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:24.768 18:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.768 18:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.337 nvme0n1 00:38:25.337 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.337 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.337 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.337 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.337 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.337 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.337 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.338 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:25.598 nvme0n1 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.598 nvme0n1 00:38:25.598 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:38:25.858 
18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.858 nvme0n1 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.858 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:38:26.118 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.119 
18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.119 nvme0n1 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.119 18:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.119 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.119 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.119 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.119 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:26.379 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.380 nvme0n1 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.380 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.643 nvme0n1 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.643 
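The block above is one complete connect_authenticate pass for sha512/ffdhe3072/keyid=0: configure the host's DH-HMAC-CHAP options, attach the controller, verify it came up, then detach. Stripped of the xtrace noise, the host-side RPC sequence is the following sketch; it assumes a running SPDK target listening on 10.0.0.1:4420, that the key0/ckey0 keyring entries were registered earlier in the test, and uses rpc.py in place of the log's rpc_cmd wrapper:

  # Host side of one connect_authenticate iteration (values taken from the log above)
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc.py bdev_nvme_detach_controller nvme0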
18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.643 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.902 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:26.903 18:05:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.903 nvme0n1 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:26.903 18:05:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.903 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.163 18:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.163 nvme0n1 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:27.163 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.423 18:05:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.423 nvme0n1 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.423 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:27.683 
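The get_main_ns_ip / ip_candidates lines that repeat before every attach are the trace of a small helper in nvmf/common.sh that resolves which IP the initiator should dial: it maps the transport type to an environment variable name and prints that variable's value. Reconstructed from the xtrace output above, roughly as follows (variable names are as they appear in the trace; the early-return error handling is an assumption):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) use the initiator IP
      [[ -z $TEST_TRANSPORT ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}         # -> NVMF_INITIATOR_IP for tcp
      [[ -z $ip ]] && return 1
      ip=${!ip}                                    # indirect expansion -> 10.0.0.1 here
      [[ -z $ip ]] && return 1
      echo "$ip"
  }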
18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:27.683 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
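The three echo lines at the start of each nvmet_auth_set_key call are the target-side half of the setup: they push the digest, DH group, and DHHC-1 secret for this host into the kernel nvmet configuration. A sketch of the equivalent writes, assuming the stock /sys/kernel/config/nvmet layout and the dhchap_* attribute names of the kernel nvmet host interface (the paths and attribute names are assumptions; the values are the ones in the log):

  hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$hostdir/dhchap_hash"      # digest for the handshake
  echo 'ffdhe3072'    > "$hostdir/dhchap_dhgroup"   # FFDHE group for DH augmentation
  echo 'DHHC-1:03:NTVjNDMx...OYiWuqM=:' > "$hostdir/dhchap_key"   # host secret (abbreviated here)
  # No dhchap_ctrl_key write for keyid=4: ckey is empty, so bidirectional
  # (controller) authentication is not configured for this key. That is the
  # [[ -z '' ]] branch visible in the trace above.

The DHHC-1:<nn>: prefix on each secret encodes how the raw key material was transformed (00 = unhashed, 01/02/03 = HMAC-SHA-256/384/512), which is why the keyid=4 secret above starts with 03 while the keyid=0 secrets start with 00.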
00:38:27.684 nvme0n1 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.684 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:27.943 18:05:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.943 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.202 nvme0n1 00:38:28.202 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.202 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:28.202 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:28.202 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.202 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.202 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.202 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:28.202 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:28.203 18:05:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:28.203 18:05:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.203 18:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.462 nvme0n1 00:38:28.462 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.462 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.463 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.722 nvme0n1 00:38:28.722 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.722 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:28.722 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.723 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.982 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.982 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:28.982 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:28.982 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:28.982 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:28.982 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:28.982 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:28.982 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:28.983 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:28.983 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:28.983 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:28.983 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:28.983 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:28.983 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.983 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.242 nvme0n1 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:29.242 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.243 18:05:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.508 nvme0n1 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.508 18:05:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.508 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.082 nvme0n1 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:30.082 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:30.082 18:05:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.083 18:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.342 nvme0n1 00:38:30.342 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.342 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:30.342 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:30.342 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.342 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.603 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.863 nvme0n1 00:38:30.863 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.863 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:30.863 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:30.863 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.863 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.863 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.123 18:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.384 nvme0n1 00:38:31.384 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.384 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:31.384 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:31.384 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.384 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.384 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.384 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:31.385 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:31.645 18:05:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.645 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.906 nvme0n1 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE2ZDM3MzFlODY3MWI5YzFmZDFjMDZjNTc5YjZiZGSfJ5VX: 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTAzMDA5M2U1MmNjZmVkY2MyNGFlZTVlZDlmNGMzYmJjMjYwMWMzZTc4MjE1YTQ3NzA4OTcwNWQ0Y2VhNmNiNjn+rCs=: 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.906 18:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.847 nvme0n1 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.847 18:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.417 nvme0n1 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:33.417 18:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:33.417 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.418 18:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.418 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.988 nvme0n1 00:38:33.988 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.988 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:33.988 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:33.988 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.988 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.988 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBiYTg0OTE2NDE4OTc2NjQ3OTQwMTg5ODU1MTBkMzg1YzQ3NDc2MGZiMDMyMTE3g6eiHg==: 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: ]] 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTA2MjE0ZDI1MjgwNWQ2NGM4NTNiYWEwNTFjMjBkYmMo5gHQ: 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:34.250 18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.250 
18:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.824 nvme0n1 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVjNDMxYmJmZjEzZjk5ZTUzZjZlYTg1OGRlYzkzMDhkYmExNzhmMmY1ZGNlYTcyZWIwZmNmZDVhNDM5ODMwOYiWuqM=: 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:34.824 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.825 18:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.764 nvme0n1 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.764 request: 00:38:35.764 { 00:38:35.764 "name": "nvme0", 00:38:35.764 "trtype": "tcp", 00:38:35.764 "traddr": "10.0.0.1", 00:38:35.764 "adrfam": "ipv4", 00:38:35.764 "trsvcid": "4420", 00:38:35.764 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:35.764 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:35.764 "prchk_reftag": false, 00:38:35.764 "prchk_guard": false, 00:38:35.764 "hdgst": false, 00:38:35.764 "ddgst": false, 00:38:35.764 "allow_unrecognized_csi": false, 00:38:35.764 "method": "bdev_nvme_attach_controller", 00:38:35.764 "req_id": 1 00:38:35.764 } 00:38:35.764 Got JSON-RPC error response 00:38:35.764 response: 00:38:35.764 { 00:38:35.764 "code": -5, 00:38:35.764 "message": "Input/output error" 00:38:35.764 } 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:35.764 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 
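
The rejected attach traced above is the intent of this block: with DH-HMAC-CHAP keys installed on the target via nvmet_auth_set_key, a bdev_nvme_attach_controller call that presents no --dhchap-key must fail, so rpc_cmd returns the JSON-RPC error -5 (Input/output error) seen in the response and the NOT wrapper turns that failure into a pass. A minimal standalone sketch of the same check, assuming SPDK's scripts/rpc.py is on PATH; the address, port, and NQNs are taken from the trace, and this is a hypothetical reproduction rather than the harness's own code:

# Hypothetical standalone reproduction (assumes a running SPDK target at
# 10.0.0.1:4420 that requires DH-HMAC-CHAP for nqn.2024-02.io.spdk:cnode0).
if rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
  echo "FAIL: unauthenticated connect unexpectedly succeeded" >&2
  exit 1
fi
# As in the trace, a zero controller count confirms nothing was left attached:
[[ $(rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]
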
00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.765 request: 00:38:35.765 { 00:38:35.765 "name": "nvme0", 00:38:35.765 "trtype": "tcp", 00:38:35.765 "traddr": "10.0.0.1", 00:38:35.765 "adrfam": "ipv4", 00:38:35.765 "trsvcid": "4420", 00:38:35.765 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:35.765 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:35.765 "prchk_reftag": false, 00:38:35.765 "prchk_guard": false, 00:38:35.765 "hdgst": false, 00:38:35.765 "ddgst": false, 00:38:35.765 "dhchap_key": "key2", 00:38:35.765 "allow_unrecognized_csi": false, 00:38:35.765 "method": "bdev_nvme_attach_controller", 00:38:35.765 "req_id": 1 00:38:35.765 } 00:38:35.765 Got JSON-RPC error response 00:38:35.765 response: 00:38:35.765 { 00:38:35.765 "code": -5, 00:38:35.765 "message": "Input/output error" 00:38:35.765 } 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
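
Note: the `NOT rpc_cmd ...` lines and the `es=1` / `(( !es == 0 ))` arithmetic above come from the autotest harness's expected-failure wrapper. A simplified sketch of that pattern (the real helper in common/autotest_common.sh additionally validates its argument with `type -t` and special-cases exit codes above 128, both visible in the trace):

    # Simplified expected-failure wrapper: returns success only when the
    # wrapped command fails. A sketch of the traced logic, not a verbatim copy.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # killed by a signal: treat as harness error
        (( !es == 0 ))               # invert: nonzero exit becomes success
    }

    NOT false && echo "command failed as expected"
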
00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.765 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.025 request: 00:38:36.025 { 00:38:36.025 "name": "nvme0", 00:38:36.025 "trtype": "tcp", 00:38:36.025 "traddr": "10.0.0.1", 00:38:36.025 "adrfam": "ipv4", 00:38:36.025 "trsvcid": "4420", 00:38:36.025 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:36.025 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:36.025 "prchk_reftag": false, 00:38:36.025 "prchk_guard": false, 00:38:36.025 "hdgst": false, 00:38:36.025 "ddgst": false, 00:38:36.025 "dhchap_key": "key1", 00:38:36.025 "dhchap_ctrlr_key": "ckey2", 00:38:36.025 "allow_unrecognized_csi": false, 00:38:36.025 "method": "bdev_nvme_attach_controller", 00:38:36.025 "req_id": 1 00:38:36.025 } 00:38:36.025 Got JSON-RPC error response 00:38:36.025 response: 00:38:36.025 { 00:38:36.025 "code": -5, 00:38:36.025 "message": "Input/output 
error" 00:38:36.025 } 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.025 nvme0n1 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.025 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.285 18:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.285 request: 00:38:36.286 { 00:38:36.286 "name": "nvme0", 00:38:36.286 "dhchap_key": "key1", 00:38:36.286 "dhchap_ctrlr_key": "ckey2", 00:38:36.286 "method": "bdev_nvme_set_keys", 00:38:36.286 "req_id": 1 00:38:36.286 } 00:38:36.286 Got JSON-RPC error response 00:38:36.286 response: 00:38:36.286 { 00:38:36.286 "code": -13, 00:38:36.286 "message": "Permission denied" 00:38:36.286 } 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:38:36.286 18:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:38:37.226 18:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:38:37.226 18:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:38:37.226 18:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.226 18:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.226 18:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.487 18:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:38:37.487 18:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTFiMjJkOTQwMDU5N2IwNDMwM2I4OGZlZjg0YmEzMTk0NjkxNTE5YjViZjdhZWExdaQX7w==: 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: ]] 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:N2VkYjFjNjRkYjhlOWViNmVjNmVlOGM2NGUwN2NiZTFkODgwZjFkMmYzNTQ4M2ZmWxcj3Q==: 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.427 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.688 nvme0n1 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjA1ODkxM2U2M2Q1Njg1ZWI1NTBiMjBiOTY4NDJmOWFANGgz: 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: ]] 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTA1ZTdkMjBlMTU4Y2JjOTk3YWJiYTJiMjg1YjM0MmSX/SCw: 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.688 request: 00:38:38.688 { 00:38:38.688 "name": "nvme0", 00:38:38.688 "dhchap_key": "key2", 00:38:38.688 "dhchap_ctrlr_key": "ckey1", 00:38:38.688 "method": "bdev_nvme_set_keys", 00:38:38.688 "req_id": 1 00:38:38.688 } 00:38:38.688 Got JSON-RPC error response 00:38:38.688 response: 00:38:38.688 { 00:38:38.688 "code": -13, 00:38:38.688 "message": "Permission denied" 00:38:38.688 } 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:38:38.688 18:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:38:39.627 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:38:39.627 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:38:39.628 18:05:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:39.628 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:39.628 rmmod nvme_tcp 00:38:39.888 rmmod nvme_fabrics 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 2909072 ']' 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 2909072 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2909072 ']' 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2909072 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2909072 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:39.888 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2909072' 00:38:39.888 killing process with pid 2909072 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2909072 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2909072 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:38:39.889 18:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:38:42.433 18:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:45.850 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:45.850 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:46.111 18:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fj3 /tmp/spdk.key-null.yid /tmp/spdk.key-sha256.Unz /tmp/spdk.key-sha384.yw9 /tmp/spdk.key-sha512.uCV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:38:46.111 18:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:49.408 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:38:49.408 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:49.408 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:49.408 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:49.978 00:38:49.978 real 1m0.886s 00:38:49.978 user 0m54.843s 00:38:49.978 sys 0m16.025s 00:38:49.978 18:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:49.978 18:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.978 ************************************ 00:38:49.978 END TEST nvmf_auth_host 00:38:49.978 ************************************ 00:38:49.978 18:05:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:38:49.979 18:05:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:49.979 18:05:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:49.979 18:05:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:49.979 18:05:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.979 ************************************ 00:38:49.979 START TEST nvmf_digest 00:38:49.979 ************************************ 00:38:49.979 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:49.979 * Looking for test storage... 
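
Note on the cleanup traced just above: host/auth.sh tears the kernel nvmet target down through configfs, removing symlinks before directories and leaves before parents, then unloads the modules. A hedged sketch of that sequence (paths and ordering match the trace; the exact file receiving the `echo 0` is inferred to be the namespace's enable attribute):

    # Sketch of clean_kernel_target from nvmf/common.sh; run as root.
    nqn=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet

    rm "$cfg/subsystems/$nqn/allowed_hosts/$host"        # drop host ACL symlink
    rmdir "$cfg/hosts/$host"
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # inferred target of 'echo 0'
    rm -f "$cfg/ports/1/subsystems/$nqn"                 # detach subsystem from port
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                          # unload the kernel target
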
00:38:49.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:49.979 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:49.979 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:49.979 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:50.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.241 --rc genhtml_branch_coverage=1 00:38:50.241 --rc genhtml_function_coverage=1 00:38:50.241 --rc genhtml_legend=1 00:38:50.241 --rc geninfo_all_blocks=1 00:38:50.241 --rc geninfo_unexecuted_blocks=1 00:38:50.241 00:38:50.241 ' 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:50.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.241 --rc genhtml_branch_coverage=1 00:38:50.241 --rc genhtml_function_coverage=1 00:38:50.241 --rc genhtml_legend=1 00:38:50.241 --rc geninfo_all_blocks=1 00:38:50.241 --rc geninfo_unexecuted_blocks=1 00:38:50.241 00:38:50.241 ' 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:50.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.241 --rc genhtml_branch_coverage=1 00:38:50.241 --rc genhtml_function_coverage=1 00:38:50.241 --rc genhtml_legend=1 00:38:50.241 --rc geninfo_all_blocks=1 00:38:50.241 --rc geninfo_unexecuted_blocks=1 00:38:50.241 00:38:50.241 ' 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:50.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.241 --rc genhtml_branch_coverage=1 00:38:50.241 --rc genhtml_function_coverage=1 00:38:50.241 --rc genhtml_legend=1 00:38:50.241 --rc geninfo_all_blocks=1 00:38:50.241 --rc geninfo_unexecuted_blocks=1 00:38:50.241 00:38:50.241 ' 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.241 
18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.241 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:50.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:50.242 18:05:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:38:50.242 18:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:58.380 
18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:58.380 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:58.380 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:58.380 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.380 
18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.380 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:58.381 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:58.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:58.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:38:58.381 00:38:58.381 --- 10.0.0.2 ping statistics --- 00:38:58.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.381 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:58.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:58.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:38:58.381 00:38:58.381 --- 10.0.0.1 ping statistics --- 00:38:58.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.381 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:58.381 ************************************ 00:38:58.381 START TEST nvmf_digest_clean 00:38:58.381 ************************************ 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=2925797 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 2925797 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2925797 ']' 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:58.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:58.381 18:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:58.381 [2024-11-20 18:05:57.578963] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:38:58.381 [2024-11-20 18:05:57.579026] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:58.381 [2024-11-20 18:05:57.665356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.381 [2024-11-20 18:05:57.711870] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:58.381 [2024-11-20 18:05:57.711921] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:58.381 [2024-11-20 18:05:57.711929] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:58.381 [2024-11-20 18:05:57.711936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:58.381 [2024-11-20 18:05:57.711943] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
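For reference, the namespace plumbing and target launch traced above reduce to the following sketch. Assumptions: cvl_0_0/cvl_0_1 are the two ports found under 0000:4b:00.* earlier in the trace, and SPDK_ROOT stands in for the job's absolute Jenkins checkout path; commands otherwise mirror the trace.

set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# the trace also tags this rule with an SPDK_NVMF comment for later cleanup
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1        # namespace -> root ns
modprobe nvme-tcp
# launch the target inside the namespace, paused until RPC configuration:
ip netns exec "$NS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &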
00:38:58.381 [2024-11-20 18:05:57.711966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:58.643 null0 00:38:58.643 [2024-11-20 18:05:58.525173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.643 [2024-11-20 18:05:58.549508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:58.643 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2926007 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2926007 /var/tmp/bperf.sock 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2926007 ']' 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:58.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:58.904 18:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:58.904 [2024-11-20 18:05:58.607951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:38:58.904 [2024-11-20 18:05:58.608014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2926007 ] 00:38:58.904 [2024-11-20 18:05:58.688030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.904 [2024-11-20 18:05:58.735834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.863 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:59.863 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:38:59.863 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:59.863 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:59.863 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:59.863 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:59.863 18:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:00.434 nvme0n1 00:39:00.434 18:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:00.434 18:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:00.434 Running I/O for 2 seconds... 
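Each run_bperf invocation repeats the same client-side sequence; for this first randread 4 KiB / qd 128 pass it is, in sketch form (SPDK_ROOT again stands in for the checkout path, all flags verbatim from the trace):

"$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# waitforlisten blocks here until /var/tmp/bperf.sock appears
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
# --ddgst enables the NVMe/TCP data digest, which is what generates crc32c work
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests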
00:39:02.323 19070.00 IOPS, 74.49 MiB/s [2024-11-20T17:06:02.239Z] 20841.00 IOPS, 81.41 MiB/s 00:39:02.324 Latency(us) 00:39:02.324 [2024-11-20T17:06:02.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.324 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:02.324 nvme0n1 : 2.00 20862.73 81.50 0.00 0.00 6128.79 2293.76 18786.99 00:39:02.324 [2024-11-20T17:06:02.240Z] =================================================================================================================== 00:39:02.324 [2024-11-20T17:06:02.240Z] Total : 20862.73 81.50 0.00 0.00 6128.79 2293.76 18786.99 00:39:02.324 { 00:39:02.324 "results": [ 00:39:02.324 { 00:39:02.324 "job": "nvme0n1", 00:39:02.324 "core_mask": "0x2", 00:39:02.324 "workload": "randread", 00:39:02.324 "status": "finished", 00:39:02.324 "queue_depth": 128, 00:39:02.324 "io_size": 4096, 00:39:02.324 "runtime": 2.004052, 00:39:02.324 "iops": 20862.732104755763, 00:39:02.324 "mibps": 81.4950472842022, 00:39:02.324 "io_failed": 0, 00:39:02.324 "io_timeout": 0, 00:39:02.324 "avg_latency_us": 6128.790460974249, 00:39:02.324 "min_latency_us": 2293.76, 00:39:02.324 "max_latency_us": 18786.986666666668 00:39:02.324 } 00:39:02.324 ], 00:39:02.324 "core_count": 1 00:39:02.324 } 00:39:02.324 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:02.324 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:02.324 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:02.324 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:02.324 | select(.opcode=="crc32c") 00:39:02.324 | "\(.module_name) \(.executed)"' 00:39:02.324 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2926007 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2926007 ']' 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2926007 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2926007 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = 
sudo ']' 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2926007' 00:39:02.585 killing process with pid 2926007 00:39:02.585 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2926007 00:39:02.585 Received shutdown signal, test time was about 2.000000 seconds 00:39:02.585 00:39:02.585 Latency(us) 00:39:02.585 [2024-11-20T17:06:02.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.586 [2024-11-20T17:06:02.502Z] =================================================================================================================== 00:39:02.586 [2024-11-20T17:06:02.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:02.586 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2926007 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2926924 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2926924 /var/tmp/bperf.sock 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2926924 ']' 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:02.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:02.847 [2024-11-20 18:06:02.612476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:39:02.847 [2024-11-20 18:06:02.612534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2926924 ] 00:39:02.847 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:02.847 Zero copy mechanism will not be used. 00:39:02.847 [2024-11-20 18:06:02.686000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.847 [2024-11-20 18:06:02.714079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:02.847 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:03.107 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:03.107 18:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:03.367 nvme0n1 00:39:03.367 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:03.367 18:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:03.627 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:03.627 Zero copy mechanism will not be used. 00:39:03.627 Running I/O for 2 seconds... 
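The zero-copy notices above are expected here: the 131072-byte I/O size exceeds the 65536-byte zero copy threshold, so the socket layer falls back to copying. The MiB/s column in the result table that follows is plain arithmetic, IOPS x io_size / 2^20; checking the first one-second sample below:

awk 'BEGIN { printf "%.2f MiB/s\n", 3930.00 * 131072 / (1024 * 1024) }'   # -> 491.25 MiB/s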
00:39:05.507 3930.00 IOPS, 491.25 MiB/s [2024-11-20T17:06:05.423Z] 3566.00 IOPS, 445.75 MiB/s 00:39:05.507 Latency(us) 00:39:05.508 [2024-11-20T17:06:05.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.508 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:39:05.508 nvme0n1 : 2.00 3572.76 446.60 0.00 0.00 4474.73 682.67 7755.09 00:39:05.508 [2024-11-20T17:06:05.424Z] =================================================================================================================== 00:39:05.508 [2024-11-20T17:06:05.424Z] Total : 3572.76 446.60 0.00 0.00 4474.73 682.67 7755.09 00:39:05.508 { 00:39:05.508 "results": [ 00:39:05.508 { 00:39:05.508 "job": "nvme0n1", 00:39:05.508 "core_mask": "0x2", 00:39:05.508 "workload": "randread", 00:39:05.508 "status": "finished", 00:39:05.508 "queue_depth": 16, 00:39:05.508 "io_size": 131072, 00:39:05.508 "runtime": 2.003772, 00:39:05.508 "iops": 3572.7617712993297, 00:39:05.508 "mibps": 446.5952214124162, 00:39:05.508 "io_failed": 0, 00:39:05.508 "io_timeout": 0, 00:39:05.508 "avg_latency_us": 4474.731956977232, 00:39:05.508 "min_latency_us": 682.6666666666666, 00:39:05.508 "max_latency_us": 7755.093333333333 00:39:05.508 } 00:39:05.508 ], 00:39:05.508 "core_count": 1 00:39:05.508 } 00:39:05.508 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:05.508 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:05.508 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:05.508 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:05.508 | select(.opcode=="crc32c") 00:39:05.508 | "\(.module_name) \(.executed)"' 00:39:05.508 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2926924 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2926924 ']' 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2926924 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2926924 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2926924' 00:39:05.769 killing process with pid 2926924 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2926924 00:39:05.769 Received shutdown signal, test time was about 2.000000 seconds 00:39:05.769 00:39:05.769 Latency(us) 00:39:05.769 [2024-11-20T17:06:05.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.769 [2024-11-20T17:06:05.685Z] =================================================================================================================== 00:39:05.769 [2024-11-20T17:06:05.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2926924 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2927388 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2927388 /var/tmp/bperf.sock 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2927388 ']' 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:05.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:05.769 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:06.030 [2024-11-20 18:06:05.699385] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:39:06.030 [2024-11-20 18:06:05.699441] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927388 ] 00:39:06.030 [2024-11-20 18:06:05.775473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.030 [2024-11-20 18:06:05.803557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:06.030 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:06.030 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:39:06.030 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:06.030 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:06.030 18:06:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:06.290 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:06.290 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:06.860 nvme0n1 00:39:06.860 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:06.860 18:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:06.860 Running I/O for 2 seconds... 
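After each run, the trace shows the crc32c accounting check from host/digest.sh lines 93-96 (visible after the result table below, as after the earlier ones). It reduces to roughly this sketch, with software as the expected module because every run here passes scan_dsa=false:

read -r acc_module acc_executed < <(
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))                 # some digests were actually computed
[[ $acc_module == software ]]          # and by the expected module (no DSA in this job)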
00:39:08.741 29688.00 IOPS, 115.97 MiB/s [2024-11-20T17:06:08.657Z] 29732.00 IOPS, 116.14 MiB/s 00:39:08.742 Latency(us) 00:39:08.742 [2024-11-20T17:06:08.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.742 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:08.742 nvme0n1 : 2.00 29735.16 116.15 0.00 0.00 4297.94 2157.23 15291.73 00:39:08.742 [2024-11-20T17:06:08.658Z] =================================================================================================================== 00:39:08.742 [2024-11-20T17:06:08.658Z] Total : 29735.16 116.15 0.00 0.00 4297.94 2157.23 15291.73 00:39:08.742 { 00:39:08.742 "results": [ 00:39:08.742 { 00:39:08.742 "job": "nvme0n1", 00:39:08.742 "core_mask": "0x2", 00:39:08.742 "workload": "randwrite", 00:39:08.742 "status": "finished", 00:39:08.742 "queue_depth": 128, 00:39:08.742 "io_size": 4096, 00:39:08.742 "runtime": 2.004092, 00:39:08.742 "iops": 29735.16185883682, 00:39:08.742 "mibps": 116.15297601108132, 00:39:08.742 "io_failed": 0, 00:39:08.742 "io_timeout": 0, 00:39:08.742 "avg_latency_us": 4297.93745558688, 00:39:08.742 "min_latency_us": 2157.2266666666665, 00:39:08.742 "max_latency_us": 15291.733333333334 00:39:08.742 } 00:39:08.742 ], 00:39:08.742 "core_count": 1 00:39:08.742 } 00:39:08.742 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:08.742 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:08.742 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:08.742 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:08.742 | select(.opcode=="crc32c") 00:39:08.742 | "\(.module_name) \(.executed)"' 00:39:08.742 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2927388 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2927388 ']' 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2927388 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2927388 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- 
# '[' reactor_1 = sudo ']' 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2927388' 00:39:09.003 killing process with pid 2927388 00:39:09.003 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2927388 00:39:09.003 Received shutdown signal, test time was about 2.000000 seconds 00:39:09.003 00:39:09.004 Latency(us) 00:39:09.004 [2024-11-20T17:06:08.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:09.004 [2024-11-20T17:06:08.920Z] =================================================================================================================== 00:39:09.004 [2024-11-20T17:06:08.920Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:09.004 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2927388 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2928043 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2928043 /var/tmp/bperf.sock 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2928043 ']' 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:09.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:09.264 18:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:09.264 [2024-11-20 18:06:09.008857] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
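The killprocess helper that tears down each bdevperf instance (and, at the end of the test, the target itself) follows the pattern traced above; roughly, as a sketch of the checks the xtrace shows:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1          # pid must still be alive
    if [ "$(uname)" = Linux ]; then
        # refuse to kill if the pid now belongs to a sudo wrapper
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}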
00:39:09.264 [2024-11-20 18:06:09.008914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928043 ] 00:39:09.264 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:09.264 Zero copy mechanism will not be used. 00:39:09.264 [2024-11-20 18:06:09.083967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.264 [2024-11-20 18:06:09.112076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.264 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:09.264 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:39:09.264 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:09.264 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:09.264 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:09.524 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:09.524 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:09.783 nvme0n1 00:39:09.783 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:09.783 18:06:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:10.043 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:10.043 Zero copy mechanism will not be used. 00:39:10.043 Running I/O for 2 seconds... 
00:39:11.926 4849.00 IOPS, 606.12 MiB/s [2024-11-20T17:06:11.842Z] 6170.00 IOPS, 771.25 MiB/s 00:39:11.926 Latency(us) 00:39:11.926 [2024-11-20T17:06:11.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.926 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:11.926 nvme0n1 : 2.01 6160.20 770.03 0.00 0.00 2591.65 1153.71 8028.16 00:39:11.926 [2024-11-20T17:06:11.842Z] =================================================================================================================== 00:39:11.926 [2024-11-20T17:06:11.842Z] Total : 6160.20 770.03 0.00 0.00 2591.65 1153.71 8028.16 00:39:11.926 { 00:39:11.926 "results": [ 00:39:11.926 { 00:39:11.926 "job": "nvme0n1", 00:39:11.926 "core_mask": "0x2", 00:39:11.926 "workload": "randwrite", 00:39:11.926 "status": "finished", 00:39:11.926 "queue_depth": 16, 00:39:11.926 "io_size": 131072, 00:39:11.926 "runtime": 2.006265, 00:39:11.926 "iops": 6160.203163590054, 00:39:11.926 "mibps": 770.0253954487567, 00:39:11.926 "io_failed": 0, 00:39:11.926 "io_timeout": 0, 00:39:11.926 "avg_latency_us": 2591.6510095207273, 00:39:11.926 "min_latency_us": 1153.7066666666667, 00:39:11.926 "max_latency_us": 8028.16 00:39:11.926 } 00:39:11.926 ], 00:39:11.926 "core_count": 1 00:39:11.926 } 00:39:11.926 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:11.926 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:11.926 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:11.926 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:11.926 | select(.opcode=="crc32c") 00:39:11.926 | "\(.module_name) \(.executed)"' 00:39:11.926 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2928043 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2928043 ']' 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2928043 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:12.187 18:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2928043 00:39:12.187 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:12.187 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:39:12.187 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2928043' 00:39:12.187 killing process with pid 2928043 00:39:12.187 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2928043 00:39:12.187 Received shutdown signal, test time was about 2.000000 seconds 00:39:12.187 00:39:12.187 Latency(us) 00:39:12.187 [2024-11-20T17:06:12.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:12.187 [2024-11-20T17:06:12.103Z] =================================================================================================================== 00:39:12.187 [2024-11-20T17:06:12.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:12.187 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2928043 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2925797 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2925797 ']' 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2925797 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2925797 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2925797' 00:39:12.447 killing process with pid 2925797 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2925797 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2925797 00:39:12.447 00:39:12.447 real 0m14.798s 00:39:12.447 user 0m28.863s 00:39:12.447 sys 0m3.620s 00:39:12.447 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:12.448 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:12.448 ************************************ 00:39:12.448 END TEST nvmf_digest_clean 00:39:12.448 ************************************ 00:39:12.448 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:39:12.448 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:12.448 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.448 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:12.708 ************************************ 00:39:12.708 START TEST nvmf_digest_error 00:39:12.708 ************************************ 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=2929095 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 2929095 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2929095 ']' 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:12.708 18:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:12.708 [2024-11-20 18:06:12.420895] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:39:12.708 [2024-11-20 18:06:12.420949] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:12.708 [2024-11-20 18:06:12.503296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.708 [2024-11-20 18:06:12.531146] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:12.708 [2024-11-20 18:06:12.531185] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:12.708 [2024-11-20 18:06:12.531192] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:12.708 [2024-11-20 18:06:12.531196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:12.708 [2024-11-20 18:06:12.531201] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:12.708 [2024-11-20 18:06:12.531219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:13.647 [2024-11-20 18:06:13.253184] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:13.647 null0 00:39:13.647 [2024-11-20 18:06:13.325092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:13.647 [2024-11-20 18:06:13.349291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2929433 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2929433 /var/tmp/bperf.sock 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2929433 ']' 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:13.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:13.647 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:13.647 [2024-11-20 18:06:13.404884] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:39:13.647 [2024-11-20 18:06:13.404932] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929433 ] 00:39:13.647 [2024-11-20 18:06:13.478892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.647 [2024-11-20 18:06:13.507230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:13.908 18:06:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:14.480 nvme0n1 00:39:14.480 18:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:39:14.480 18:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.480 18:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
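At this point the error run is fully wired: the target's crc32c opcode was assigned to the error accel module at startup, the client keeps NVMe error statistics and retries indefinitely (--bdev-retry-count -1), injection was disabled while the controller attached, and it is about to be switched to corrupt digests at an interval of 256 operations. In sketch form, where rpc_cmd stands for the suite's rpc.py wrapper against the target's default /var/tmp/spdk.sock:

rpc_cmd accel_assign_opc -o crc32c -m error                    # done at target startup
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable          # quiet while connecting
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # now corrupt digests

Each injected corruption then surfaces below as a "data digest error on tqpair" on the host side paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the retry policy above absorbs.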
00:39:14.480 18:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:14.480 18:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:39:14.480 18:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:14.480 Running I/O for 2 seconds...
[2024-11-20 18:06:14.311800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500)
[2024-11-20 18:06:14.311829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 18:06:14.311839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0xea2500), the failing READ, a TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every READ completed between 18:06:14.323 and 18:06:15.291; only the timestamps, cid, and lba values vary ...]
00:39:15.530 27579.00 IOPS, 107.73 MiB/s [2024-11-20T17:06:15.446Z]
[... the same sequence continues from 18:06:15.301 through 18:06:15.483 ...]
00:39:15.792 [2024-11-20 18:06:15.493392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500)
[2024-11-20 18:06:15.493408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 18:06:15.493415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:15.792 [2024-11-20 18:06:15.503063]
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.503083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.503091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.511083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.511099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.511105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.520271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.520287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.520293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.529606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.529623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.529629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.538810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.538826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.538832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.546881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.546898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.546904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.557903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.557920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.557926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:39:15.792 [2024-11-20 18:06:15.566259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.566275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.566281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.574514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.574531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.574537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.584503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.584519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.584526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.594282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.594299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.594305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.604031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.604048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.604054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.611790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.611807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.611813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.622115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.622132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.622138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.631297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.631314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.631320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.641248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.641265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.641271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.649685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.649701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.649707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.657796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.657813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.657825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.666289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.666305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.666312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.675396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.675412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.675418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.685204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.685221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.792 [2024-11-20 18:06:15.685227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.792 [2024-11-20 18:06:15.693551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.792 [2024-11-20 18:06:15.693567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.793 [2024-11-20 18:06:15.693573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:15.793 [2024-11-20 18:06:15.703695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:15.793 [2024-11-20 18:06:15.703712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:15.793 [2024-11-20 18:06:15.703718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.053 [2024-11-20 18:06:15.712446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.053 [2024-11-20 18:06:15.712463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.053 [2024-11-20 18:06:15.712469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.053 [2024-11-20 18:06:15.721063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.053 [2024-11-20 18:06:15.721079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.053 [2024-11-20 18:06:15.721086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.053 [2024-11-20 18:06:15.729370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.053 [2024-11-20 18:06:15.729386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.053 [2024-11-20 18:06:15.729393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.053 [2024-11-20 18:06:15.739259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.053 [2024-11-20 18:06:15.739280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.053 [2024-11-20 18:06:15.739286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.053 [2024-11-20 18:06:15.748260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.053 [2024-11-20 18:06:15.748277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.053 [2024-11-20 18:06:15.748283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.053 [2024-11-20 18:06:15.756393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.053 [2024-11-20 18:06:15.756410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.053 [2024-11-20 18:06:15.756417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.053 [2024-11-20 18:06:15.764768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.053 [2024-11-20 18:06:15.764785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.053 [2024-11-20 18:06:15.764791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.773676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.773693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.773699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.784523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.784539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.784545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.792700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.792716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.792722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.801700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.801716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.801723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.810073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.810091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:16.054 [2024-11-20 18:06:15.810097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.819539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.819556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.819562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.828219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.828236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.828243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.836867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.836884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.836890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.845776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.845792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.845799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.855513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.855530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.855536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.863643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.863659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.863665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.872481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.872497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1321 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.872503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.880994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.881010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.881017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.888876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.888896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.888903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.898528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.898545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.898551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.907146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.907165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.907172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.916217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.916233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.916239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.926178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.926195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.926201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.933474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.933491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.933497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.943977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.943994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.944000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.952354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.952371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.952377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.054 [2024-11-20 18:06:15.963481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.054 [2024-11-20 18:06:15.963498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.054 [2024-11-20 18:06:15.963504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:15.971972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:15.971989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:15.971996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:15.981110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:15.981127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:15.981133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:15.990234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:15.990251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:15.990257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:15.998166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:15.998182] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:15.998188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.007661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.007678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.007684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.017977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.017994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.018000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.026832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.026849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.026855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.036172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.036188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.036195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.044661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.044678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.044687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.054151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.054171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.054177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.061994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 
00:39:16.316 [2024-11-20 18:06:16.062010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.062017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.071398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.071414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.071421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.080614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.080630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.080637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.089325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.089342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.089348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.098447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.098463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.098470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.107950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.107968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.107974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.118004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.118020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.118027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.129838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.129858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.129864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.138634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.138650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.138657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.148217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.148234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.316 [2024-11-20 18:06:16.148241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.316 [2024-11-20 18:06:16.157600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.316 [2024-11-20 18:06:16.157617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.317 [2024-11-20 18:06:16.157624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.317 [2024-11-20 18:06:16.166574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.317 [2024-11-20 18:06:16.166591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.317 [2024-11-20 18:06:16.166597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.317 [2024-11-20 18:06:16.175125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.317 [2024-11-20 18:06:16.175142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.317 [2024-11-20 18:06:16.175148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.317 [2024-11-20 18:06:16.184169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.317 [2024-11-20 18:06:16.184186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.317 [2024-11-20 18:06:16.184192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.317 [2024-11-20 18:06:16.192704] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.317 [2024-11-20 18:06:16.192722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.317 [2024-11-20 18:06:16.192728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.317 [2024-11-20 18:06:16.201675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.317 [2024-11-20 18:06:16.201692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.317 [2024-11-20 18:06:16.201699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.317 [2024-11-20 18:06:16.210556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.317 [2024-11-20 18:06:16.210572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.317 [2024-11-20 18:06:16.210578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.317 [2024-11-20 18:06:16.219386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.317 [2024-11-20 18:06:16.219403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.317 [2024-11-20 18:06:16.219409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.578 [2024-11-20 18:06:16.228426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.578 [2024-11-20 18:06:16.228444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.578 [2024-11-20 18:06:16.228450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.578 [2024-11-20 18:06:16.238643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.578 [2024-11-20 18:06:16.238660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.578 [2024-11-20 18:06:16.238666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:16.578 [2024-11-20 18:06:16.247064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500) 00:39:16.578 [2024-11-20 18:06:16.247081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.578 [2024-11-20 18:06:16.247088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:39:16.578 [2024-11-20 18:06:16.259041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500)
00:39:16.578 [2024-11-20 18:06:16.259058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:16.578 [2024-11-20 18:06:16.259065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:16.578 [2024-11-20 18:06:16.269733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500)
00:39:16.578 [2024-11-20 18:06:16.269751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:16.578 [2024-11-20 18:06:16.269757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:16.578 [2024-11-20 18:06:16.279112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500)
00:39:16.578 [2024-11-20 18:06:16.279128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:16.578 [2024-11-20 18:06:16.279135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:16.578 [2024-11-20 18:06:16.288652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500)
00:39:16.578 [2024-11-20 18:06:16.288668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:16.578 [2024-11-20 18:06:16.288679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:16.578 27767.00 IOPS, 108.46 MiB/s [2024-11-20T17:06:16.494Z] [2024-11-20 18:06:16.297424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xea2500)
00:39:16.578 [2024-11-20 18:06:16.297441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:16.578 [2024-11-20 18:06:16.297447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:16.578
00:39:16.578 Latency(us)
00:39:16.578 [2024-11-20T17:06:16.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:16.578 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:39:16.578 nvme0n1 : 2.00 27781.17 108.52 0.00 0.00 4602.50 2280.11 18786.99
00:39:16.578 [2024-11-20T17:06:16.494Z] ===================================================================================================================
00:39:16.578 [2024-11-20T17:06:16.494Z] Total : 27781.17 108.52 0.00 0.00 4602.50 2280.11 18786.99
00:39:16.578 {
00:39:16.578   "results": [
00:39:16.578     {
00:39:16.578       "job": "nvme0n1",
00:39:16.578       "core_mask": "0x2",
00:39:16.578       "workload": "randread",
00:39:16.578       "status": "finished",
00:39:16.578       "queue_depth": 128,
00:39:16.578       "io_size": 4096,
00:39:16.578       "runtime": 2.003587,
00:39:16.578       "iops": 27781.174463599535,
00:39:16.578       "mibps": 108.52021274843568,
00:39:16.578       "io_failed": 0,
00:39:16.578       "io_timeout": 0,
00:39:16.578       "avg_latency_us": 4602.500107074845,
00:39:16.578       "min_latency_us": 2280.1066666666666,
00:39:16.578       "max_latency_us": 18786.986666666668
00:39:16.578     }
00:39:16.578   ],
00:39:16.578   "core_count": 1
00:39:16.578 }
00:39:16.578 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:16.578 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:39:16.578 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:16.578 | .driver_specific
00:39:16.578 | .nvme_error
00:39:16.578 | .status_code
00:39:16.578 | .command_transient_transport_error'
00:39:16.578 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:39:16.839 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:39:16.839 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2929433
00:39:16.839 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2929433 ']'
00:39:16.839 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2929433
00:39:16.839 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:39:16.839 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:16.839 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2929433
00:39:16.839 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2929433'
00:39:16.840 killing process with pid 2929433
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2929433
00:39:16.840 Received shutdown signal, test time was about 2.000000 seconds
00:39:16.840
00:39:16.840 Latency(us)
00:39:16.840 [2024-11-20T17:06:16.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:16.840 [2024-11-20T17:06:16.756Z] ===================================================================================================================
00:39:16.840 [2024-11-20T17:06:16.756Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2929433
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2929967
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2929967 /var/tmp/bperf.sock
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2929967 ']'
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:39:16.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:16.840 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:16.840 [2024-11-20 18:06:16.725615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:39:16.840 [2024-11-20 18:06:16.725673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929967 ]
00:39:16.840 I/O size of 131072 is greater than zero copy threshold (65536).
00:39:16.840 Zero copy mechanism will not be used.
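The trace above is the teardown/relaunch half of the test: digest.sh reads back the transient-error count from the previous bdevperf run via bdev_get_iostat plus a jq filter (the (( 218 > 0 )) check, so that pass counted 218 transient transport errors), kills the old bdevperf, and starts a fresh one for a 128 KiB randread pass at queue depth 16. A minimal sketch of the same launch-and-readout pattern in shell, assuming the SPDK tree and RPC socket paths used in this run (bperf_rpc and get_transient_errcount are digest.sh helpers visible in the trace):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # Start bdevperf on core 1 (mask 0x2): 128 KiB random reads, queue depth 16, 2 s runtime;
  # -z makes it idle until a perform_tests RPC arrives.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  # After a run, read the transient transport error count, as get_transient_errcount does:
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'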
00:39:17.100 [2024-11-20 18:06:16.798615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.100 [2024-11-20 18:06:16.826589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:17.100 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:17.100 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:39:17.100 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:17.100 18:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:17.361 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:17.361 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:17.361 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:17.361 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:17.361 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:17.361 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:17.620 nvme0n1 00:39:17.620 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:39:17.620 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:17.620 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:17.620 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:17.620 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:17.620 18:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:17.620 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:17.620 Zero copy mechanism will not be used. 00:39:17.620 Running I/O for 2 seconds... 
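Before perform_tests kicks off, the run wires up the failure mode seen below: error counters on, the controller attached over TCP with data digest (--ddgst) enabled, and the accel layer told to corrupt 32 crc32c operations so received payloads fail digest verification. A sketch of the RPC sequence, with $SPDK_DIR as an assumed path (rpc_cmd in the log talks to the target's default socket, hence no -s on the injection call):

bperf_rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
$bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests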
00:39:17.620 [2024-11-20 18:06:17.457961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.620 [2024-11-20 18:06:17.457992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.620 [2024-11-20 18:06:17.458001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.620 [2024-11-20 18:06:17.470599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.620 [2024-11-20 18:06:17.470620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.620 [2024-11-20 18:06:17.470627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.621 [2024-11-20 18:06:17.482599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.621 [2024-11-20 18:06:17.482618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.621 [2024-11-20 18:06:17.482625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.621 [2024-11-20 18:06:17.495608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.621 [2024-11-20 18:06:17.495627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.621 [2024-11-20 18:06:17.495635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.621 [2024-11-20 18:06:17.508695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.621 [2024-11-20 18:06:17.508714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.621 [2024-11-20 18:06:17.508721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.621 [2024-11-20 18:06:17.519848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.621 [2024-11-20 18:06:17.519865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.621 [2024-11-20 18:06:17.519871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.621 [2024-11-20 18:06:17.532253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.621 [2024-11-20 18:06:17.532271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.621 [2024-11-20 18:06:17.532277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.543602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.543620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.543626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.554012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.554031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.554037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.562866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.562884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.562890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.571729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.571747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.571754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.583290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.583307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.583313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.594010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.594028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.594034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.605752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.605770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.605776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.614432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.614450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.614460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.621711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.621729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.621735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.630975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.630992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.630998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.639746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.639764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.639770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.649676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.649693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.649700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.655950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.655967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.655974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.667066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.667083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:17.882 [2024-11-20 18:06:17.667090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.679262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.679279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.679285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.689938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.689956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.689962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.702015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.702038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.702044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.714003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.714021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.714027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.721297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.721314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.721320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.728241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.728259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.728265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.734892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.734909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.734915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.740550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.740568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.740574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.747890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.747908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.747914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.758798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.758815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.758821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.770802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.770819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.882 [2024-11-20 18:06:17.770825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:17.882 [2024-11-20 18:06:17.780860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.882 [2024-11-20 18:06:17.780877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.883 [2024-11-20 18:06:17.780883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:17.883 [2024-11-20 18:06:17.789216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:17.883 [2024-11-20 18:06:17.789232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:17.883 [2024-11-20 18:06:17.789239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.143 [2024-11-20 18:06:17.798528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.798545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.798551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.811186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.811203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.811210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.818241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.818258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.818264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.829327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.829345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.829351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.839889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.839908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.839914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.847920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.847938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.847945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.859299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.859317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.859326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.865425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 
[2024-11-20 18:06:17.865444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.865450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.874216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.874234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.874241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.883373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.883392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.883398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.892917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.892936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.892943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.904166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.904184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.904191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.916012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.916031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.916037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.927849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.927866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.927873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.938771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.938789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.938796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.950000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.950017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.950024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.959949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.959967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.959974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.967849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.967867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.967873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.978279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.978297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.978303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.988413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.988432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.988438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:17.999186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:17.999205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:17.999211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:18.009621] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:18.009639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:18.009645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:18.016662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:18.016680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:18.016686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:18.023724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:18.023742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:18.023752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:18.032692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:18.032712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:18.032718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:18.042744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.144 [2024-11-20 18:06:18.042762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.144 [2024-11-20 18:06:18.042768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.144 [2024-11-20 18:06:18.050496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.145 [2024-11-20 18:06:18.050515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.145 [2024-11-20 18:06:18.050522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.062088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.062106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.062113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
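Each event in this stream follows the same three-line pattern: the TCP layer reports a data digest error, the failed READ command is printed, and the completion carries status (00/22), i.e. status code type 0x0 (generic) with status code 0x22, Command Transient Transport Error; that is the status feeding the command_transient_transport_error counter polled above. A quick consistency check over a captured log (bperf.log is an assumed capture file):

# Each digest error should pair with exactly one (00/22) completion.
grep -c 'data digest error' bperf.log
grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log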
00:39:18.406 [2024-11-20 18:06:18.074007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.074026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.074033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.083982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.084001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.084007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.093821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.093840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.093846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.101486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.101504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.101511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.112034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.112056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.112062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.121952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.121970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.121976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.130016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.130034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.130040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.140979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.140998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.141004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.150954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.150973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.150980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.161944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.161962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.406 [2024-11-20 18:06:18.161968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.406 [2024-11-20 18:06:18.171343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.406 [2024-11-20 18:06:18.171362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.171368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.181641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.181659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.181665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.191684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.191702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.191708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.203107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.203126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.203132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.215495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.215514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.215520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.227390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.227409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.227415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.239175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.239193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.239200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.251309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.251328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.251335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.262631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.262650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.262656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.274128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.274147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.274153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.286053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.286073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.286079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.294477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.294496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.294505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.305270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.305289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.305295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.407 [2024-11-20 18:06:18.317824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.407 [2024-11-20 18:06:18.317844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.407 [2024-11-20 18:06:18.317850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.329256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.329275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.668 [2024-11-20 18:06:18.329282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.340808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.340826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.668 [2024-11-20 18:06:18.340832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.351422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.351439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.668 [2024-11-20 18:06:18.351446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.362783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.362802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.668 
[2024-11-20 18:06:18.362808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.371355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.371373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.668 [2024-11-20 18:06:18.371379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.373987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.374005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.668 [2024-11-20 18:06:18.374011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.383830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.383850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.668 [2024-11-20 18:06:18.383857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.393337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.393355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.668 [2024-11-20 18:06:18.393362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.668 [2024-11-20 18:06:18.405536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.668 [2024-11-20 18:06:18.405555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.405562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.417456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.417474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.417481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.429323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.429342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.429348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.440539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.440556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.440563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.669 3039.00 IOPS, 379.88 MiB/s [2024-11-20T17:06:18.585Z] [2024-11-20 18:06:18.453321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.453340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.453346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.462881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.462899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.462906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.474188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.474206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.474213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.482402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.482420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.482426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.493582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.493600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.493606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.504025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.504043] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.504050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.513711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.513729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.513735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.523280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.523298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.523305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.534236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.534254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.534261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.541385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.541403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.541409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.550777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.550795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.550801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.560808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.560827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.560837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.570634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 
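The interim stat line above ("3039.00 IOPS, 379.88 MiB/s") is self-consistent with the 131072-byte I/O size: at 128 KiB per I/O, MiB/s is simply IOPS divided by 8, as this one-liner confirms:

# 3039 IOPS x 131072 B, expressed in MiB/s -> 379.875, printed as 379.88
echo '3039 * 131072 / 1048576' | bc -l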
00:39:18.669 [2024-11-20 18:06:18.570652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.570659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.669 [2024-11-20 18:06:18.580499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.669 [2024-11-20 18:06:18.580518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.669 [2024-11-20 18:06:18.580524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.591289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.591308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.591314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.599169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.599187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.599194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.607241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.607259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.607266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.617475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.617494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.617501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.628482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.628502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.628510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.637126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.637145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.637152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.647896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.647915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.647921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.659329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.659349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.659355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.669717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.669736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.669742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.677731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.677749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.677755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.686969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.686987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.686994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.698482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.930 [2024-11-20 18:06:18.698501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.930 [2024-11-20 18:06:18.698508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.930 [2024-11-20 18:06:18.707008] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.707026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.707033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.718015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.718034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.718040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.726629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.726647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.726657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.736740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.736758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.736765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.745389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.745408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.745415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.753426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.753445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.753451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.764602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.764621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.764627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:39:18.931 [2024-11-20 18:06:18.774996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.775014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.775021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.781708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.781726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.781732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.793994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.794011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.794017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.806098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.806116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.806123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.816191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.816213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.816219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.825952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.825971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.825977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:18.931 [2024-11-20 18:06:18.836636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:18.931 [2024-11-20 18:06:18.836655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:18.931 [2024-11-20 18:06:18.836661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.847928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.847946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.847953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.860150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.860172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.860178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.872538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.872557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.872563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.883112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.883131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.883138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.893706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.893724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.893731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.902920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.902939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.902945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.912610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.912628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.912634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.924105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.924123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.924130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.930550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.930567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.930574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.942790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.942808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.942814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.956043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.956061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.956068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.968803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.968822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.968828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.981427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.981445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.981451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:18.993285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:18.993303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:18.993309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:19.005916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:19.005934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:19.005946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:19.018216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:19.018234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.192 [2024-11-20 18:06:19.018240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.192 [2024-11-20 18:06:19.030289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.192 [2024-11-20 18:06:19.030308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 [2024-11-20 18:06:19.030314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.193 [2024-11-20 18:06:19.041593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.193 [2024-11-20 18:06:19.041612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 [2024-11-20 18:06:19.041618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.193 [2024-11-20 18:06:19.053625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.193 [2024-11-20 18:06:19.053644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 [2024-11-20 18:06:19.053650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.193 [2024-11-20 18:06:19.065628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.193 [2024-11-20 18:06:19.065646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 [2024-11-20 18:06:19.065652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.193 [2024-11-20 18:06:19.074770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.193 [2024-11-20 18:06:19.074788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 
[2024-11-20 18:06:19.074794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.193 [2024-11-20 18:06:19.079419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.193 [2024-11-20 18:06:19.079437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 [2024-11-20 18:06:19.079444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.193 [2024-11-20 18:06:19.085833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.193 [2024-11-20 18:06:19.085851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 [2024-11-20 18:06:19.085858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.193 [2024-11-20 18:06:19.093299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.193 [2024-11-20 18:06:19.093321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 [2024-11-20 18:06:19.093327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.193 [2024-11-20 18:06:19.100793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.193 [2024-11-20 18:06:19.100811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.193 [2024-11-20 18:06:19.100817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.108674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.108693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.108699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.119877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.119896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.119902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.130826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.130845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.130852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.140868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.140886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.140893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.148356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.148374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.148381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.153514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.153532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.153539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.163257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.163276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.163283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.171172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.171191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.171197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.178319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.178337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.178343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.185285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.185303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.185309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.190251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.190270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.190277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.195248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.195266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.195272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.200911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.200930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.200938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.209959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.209977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.209983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.218424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.218442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.218448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.225713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.225732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.225744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.236432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.236450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.236457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.246454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.246472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.246479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.253908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.253927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.253935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.263013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.263031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.263038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.270200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.270217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.270224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.275069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.275087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.275093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.282442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.282460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.282467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.290039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 
[2024-11-20 18:06:19.290058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.453 [2024-11-20 18:06:19.290064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.453 [2024-11-20 18:06:19.299539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.453 [2024-11-20 18:06:19.299561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.299567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.454 [2024-11-20 18:06:19.309150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.454 [2024-11-20 18:06:19.309174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.309181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.454 [2024-11-20 18:06:19.320166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.454 [2024-11-20 18:06:19.320184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.320190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.454 [2024-11-20 18:06:19.328116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.454 [2024-11-20 18:06:19.328134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.328141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.454 [2024-11-20 18:06:19.337385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.454 [2024-11-20 18:06:19.337403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.337409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.454 [2024-11-20 18:06:19.344807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.454 [2024-11-20 18:06:19.344826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.344832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.454 [2024-11-20 18:06:19.350663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd76f80) 00:39:19.454 [2024-11-20 18:06:19.350682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.350688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.454 [2024-11-20 18:06:19.359135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.454 [2024-11-20 18:06:19.359154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.359164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.454 [2024-11-20 18:06:19.364276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.454 [2024-11-20 18:06:19.364293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.454 [2024-11-20 18:06:19.364300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.372148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.372171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.372177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.377303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.377321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.377327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.385695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.385712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.385718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.394133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.394151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.394162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.402205] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.402224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.402231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.409654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.409672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.409678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.415211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.415228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.415235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.422045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.422063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.422069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.428252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.428270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.428280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.438110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.713 [2024-11-20 18:06:19.438128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.713 [2024-11-20 18:06:19.438134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.713 [2024-11-20 18:06:19.443746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80) 00:39:19.714 [2024-11-20 18:06:19.443764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.714 [2024-11-20 18:06:19.443770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:39:19.714 3212.00 IOPS, 401.50 MiB/s [2024-11-20T17:06:19.630Z]
00:39:19.714 [2024-11-20 18:06:19.448917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd76f80)
00:39:19.714 [2024-11-20 18:06:19.448935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:19.714 [2024-11-20 18:06:19.448941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:19.714 Latency(us)
00:39:19.714 [2024-11-20T17:06:19.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:19.714 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:39:19.714 nvme0n1 : 2.00 3220.50 402.56 0.00 0.00 4963.66 421.55 14527.15
00:39:19.714 [2024-11-20T17:06:19.630Z] ===================================================================================================================
00:39:19.714 [2024-11-20T17:06:19.630Z] Total : 3220.50 402.56 0.00 0.00 4963.66 421.55 14527.15
00:39:19.714 {
00:39:19.714   "results": [
00:39:19.714     {
00:39:19.714       "job": "nvme0n1",
00:39:19.714       "core_mask": "0x2",
00:39:19.714       "workload": "randread",
00:39:19.714       "status": "finished",
00:39:19.714       "queue_depth": 16,
00:39:19.714       "io_size": 131072,
00:39:19.714       "runtime": 2.003727,
00:39:19.714       "iops": 3220.4986008573023,
00:39:19.714       "mibps": 402.5623251071628,
00:39:19.714       "io_failed": 0,
00:39:19.714       "io_timeout": 0,
00:39:19.714       "avg_latency_us": 4963.65909809391,
00:39:19.714       "min_latency_us": 421.5466666666667,
00:39:19.714       "max_latency_us": 14527.146666666667
00:39:19.714     }
00:39:19.714   ],
00:39:19.714   "core_count": 1
00:39:19.714 }
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2929967
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2929967 ']'
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2929967
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2929967
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2929967'
killing process with pid 2929967
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2929967
00:39:19.975 Received shutdown signal, test time was about 2.000000 seconds
00:39:19.975 Latency(us)
00:39:19.975 [2024-11-20T17:06:19.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:19.975 [2024-11-20T17:06:19.891Z] ===================================================================================================================
00:39:19.975 [2024-11-20T17:06:19.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2929967
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2930473
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2930473 /var/tmp/bperf.sock
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2930473 ']'
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
18:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:20.236 [2024-11-20 18:06:19.893319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
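Annotation: the get_transient_errcount step traced above (host/digest.sh@27 and @28) is just an iostat RPC piped through jq. A minimal standalone sketch of the same query, assuming it runs from the SPDK repo root against the socket and bdev names used in this run (/var/tmp/bperf.sock, nvme0n1) and that bdev_nvme_set_options was given --nvme-error-stat so the per-status-code counters are populated:

# Sketch of a helper mirroring get_transient_errcount from host/digest.sh:
get_transient_errcount() {
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}
# The test then asserts the count is non-zero, as in the (( 208 > 0 )) check above:
errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 )) || echo 'no transient transport errors recorded' >&2
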
00:39:20.236 [2024-11-20 18:06:19.893376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930473 ]
00:39:20.236 [2024-11-20 18:06:19.967623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:20.236 [2024-11-20 18:06:19.995671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:20.758 nvme0n1
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
18:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:20.758 Running I/O for 2 seconds...
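Annotation: the traces above show the whole randwrite setup in four RPCs before the timed run starts: error accounting is enabled on the bperf side, the controller is attached over TCP with data digest enabled, and crc32c corruption is injected through the accel_error module, which is what turns clean WRITEs into the digest errors logged below. A sketch of that sequence under the same assumptions (run from the SPDK repo root; bdevperf listening on /var/tmp/bperf.sock; the injection RPC sent to the default RPC socket, as rpc_cmd does above):

# Count NVMe errors per status code and retry failed I/O indefinitely:
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the target over TCP with data digest (--ddgst) enabled:
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt crc32c results so data digest checks fail (flags copied from the digest.sh@67 trace):
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
# Kick off the timed bdevperf run; the WRITE digest errors below follow immediately:
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
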
00:39:20.758 [2024-11-20 18:06:20.624252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6cc8 00:39:20.758 [2024-11-20 18:06:20.624986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:20.758 [2024-11-20 18:06:20.625012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:20.758 [2024-11-20 18:06:20.633107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1b48 00:39:20.758 [2024-11-20 18:06:20.633841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:20.758 [2024-11-20 18:06:20.633860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:20.758 [2024-11-20 18:06:20.641601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f46d0 00:39:20.758 [2024-11-20 18:06:20.642368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:20.758 [2024-11-20 18:06:20.642385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:20.758 [2024-11-20 18:06:20.652192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fcdd0 00:39:20.758 [2024-11-20 18:06:20.653627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:20.758 [2024-11-20 18:06:20.653642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:39:20.758 [2024-11-20 18:06:20.660096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0350 00:39:20.758 [2024-11-20 18:06:20.661201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:20.758 [2024-11-20 18:06:20.661221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:20.758 [2024-11-20 18:06:20.668492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e12d8 00:39:20.758 [2024-11-20 18:06:20.669598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:20.758 [2024-11-20 18:06:20.669614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.676962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e95a0 00:39:21.023 [2024-11-20 18:06:20.678055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.678070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.685427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ea680 00:39:21.023 [2024-11-20 18:06:20.686530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.686546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.693885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eb760 00:39:21.023 [2024-11-20 18:06:20.694953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.694969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.702432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fa3a0 00:39:21.023 [2024-11-20 18:06:20.703524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.703540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.710889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eee38 00:39:21.023 [2024-11-20 18:06:20.711971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.711987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.719337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198edd58 00:39:21.023 [2024-11-20 18:06:20.720398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.720414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.727786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ecc78 00:39:21.023 [2024-11-20 18:06:20.728867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.728882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.736262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f3a28 00:39:21.023 [2024-11-20 18:06:20.737354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.737370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.744696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f5be8 00:39:21.023 [2024-11-20 18:06:20.745784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.745800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.753124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f7da8 00:39:21.023 [2024-11-20 18:06:20.754214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.754230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.761573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6cc8 00:39:21.023 [2024-11-20 18:06:20.762671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.762687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.770022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f8e88 00:39:21.023 [2024-11-20 18:06:20.771105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.771120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.778471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f31b8 00:39:21.023 [2024-11-20 18:06:20.779537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.023 [2024-11-20 18:06:20.779554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.023 [2024-11-20 18:06:20.786901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f20d8 00:39:21.024 [2024-11-20 18:06:20.787987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.788003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.795340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0ff8 00:39:21.024 [2024-11-20 18:06:20.796435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.796451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.803776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eff18 00:39:21.024 [2024-11-20 18:06:20.804864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.804881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.812230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e9168 00:39:21.024 [2024-11-20 18:06:20.813316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.813332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.820685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ea248 00:39:21.024 [2024-11-20 18:06:20.821785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.821800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.829165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eb328 00:39:21.024 [2024-11-20 18:06:20.830229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.830245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.837609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f9f68 00:39:21.024 [2024-11-20 18:06:20.838717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.838732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.846052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fb048 00:39:21.024 [2024-11-20 18:06:20.847156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.847174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.854497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ee190 00:39:21.024 [2024-11-20 18:06:20.855574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.855590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.862949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ed0b0 00:39:21.024 [2024-11-20 18:06:20.864036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.864053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.871403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ebfd0 00:39:21.024 [2024-11-20 18:06:20.872483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.872500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.879829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6020 00:39:21.024 [2024-11-20 18:06:20.880927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.880946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.888262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f4f40 00:39:21.024 [2024-11-20 18:06:20.889359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.889374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.896696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f7100 00:39:21.024 [2024-11-20 18:06:20.897792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.897808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.906242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f92c0 00:39:21.024 [2024-11-20 18:06:20.907792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.907808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.912238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f2948 00:39:21.024 [2024-11-20 18:06:20.912972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 
18:06:20.912988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.920831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f1868 00:39:21.024 [2024-11-20 18:06:20.921583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.921599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.024 [2024-11-20 18:06:20.929273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0788 00:39:21.024 [2024-11-20 18:06:20.930028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.024 [2024-11-20 18:06:20.930045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:20.937711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0ea0 00:39:21.288 [2024-11-20 18:06:20.938464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:20.938480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:20.946147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198de8a8 00:39:21.288 [2024-11-20 18:06:20.946858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:20.946874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:20.954584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198df988 00:39:21.288 [2024-11-20 18:06:20.955357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:20.955374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:20.963056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0a68 00:39:21.288 [2024-11-20 18:06:20.963828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:20.963845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:20.971492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fc998 00:39:21.288 [2024-11-20 18:06:20.972251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:21.288 [2024-11-20 18:06:20.972267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:20.979925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e7818 00:39:21.288 [2024-11-20 18:06:20.980692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:20.980709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:20.988367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e88f8 00:39:21.288 [2024-11-20 18:06:20.989116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:20.989132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:20.996822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ff3c8 00:39:21.288 [2024-11-20 18:06:20.997551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:20.997567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.005263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fe720 00:39:21.288 [2024-11-20 18:06:21.006013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.006029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.013698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f46d0 00:39:21.288 [2024-11-20 18:06:21.014427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.014443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.022118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e3d08 00:39:21.288 [2024-11-20 18:06:21.022875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.022891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.030549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e2c28 00:39:21.288 [2024-11-20 18:06:21.031311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12967 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.031327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.039003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1b48 00:39:21.288 [2024-11-20 18:06:21.039716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.039732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.047455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f2d80 00:39:21.288 [2024-11-20 18:06:21.048199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.048215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.055890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f1ca0 00:39:21.288 [2024-11-20 18:06:21.056649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.056665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.064318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0bc0 00:39:21.288 [2024-11-20 18:06:21.065053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.065069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.288 [2024-11-20 18:06:21.072746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198efae0 00:39:21.288 [2024-11-20 18:06:21.073514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.288 [2024-11-20 18:06:21.073531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.081183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e8d30 00:39:21.289 [2024-11-20 18:06:21.081953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.081968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.089622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198df550 00:39:21.289 [2024-11-20 18:06:21.090365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11094 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.090381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.098059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0630 00:39:21.289 [2024-11-20 18:06:21.098812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.098831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.106511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fc560 00:39:21.289 [2024-11-20 18:06:21.107229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.107246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.114935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e73e0 00:39:21.289 [2024-11-20 18:06:21.115703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.115720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.123361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e84c0 00:39:21.289 [2024-11-20 18:06:21.124080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.124096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.131811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198feb58 00:39:21.289 [2024-11-20 18:06:21.132592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.132608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.140259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fe2e8 00:39:21.289 [2024-11-20 18:06:21.141022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.141038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.148695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fd208 00:39:21.289 [2024-11-20 18:06:21.149442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:10749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.149459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.157125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e4140 00:39:21.289 [2024-11-20 18:06:21.157834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.157851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.165558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e3060 00:39:21.289 [2024-11-20 18:06:21.166311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.166327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.173992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1f80 00:39:21.289 [2024-11-20 18:06:21.174769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.174786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.182439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f8618 00:39:21.289 [2024-11-20 18:06:21.183197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.183213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.190885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f2948 00:39:21.289 [2024-11-20 18:06:21.191632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.289 [2024-11-20 18:06:21.191648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.289 [2024-11-20 18:06:21.199322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f1868 00:39:21.552 [2024-11-20 18:06:21.200069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.552 [2024-11-20 18:06:21.200085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.552 [2024-11-20 18:06:21.207744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0788 00:39:21.552 [2024-11-20 18:06:21.208502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:16652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.552 [2024-11-20 18:06:21.208518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.552 [2024-11-20 18:06:21.216181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0ea0 00:39:21.552 [2024-11-20 18:06:21.216927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.552 [2024-11-20 18:06:21.216943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.552 [2024-11-20 18:06:21.224641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198de8a8 00:39:21.552 [2024-11-20 18:06:21.225372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.552 [2024-11-20 18:06:21.225388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.552 [2024-11-20 18:06:21.233094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198df988 00:39:21.552 [2024-11-20 18:06:21.233844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.552 [2024-11-20 18:06:21.233861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.552 [2024-11-20 18:06:21.241556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0a68 00:39:21.552 [2024-11-20 18:06:21.242319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.552 [2024-11-20 18:06:21.242335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.552 [2024-11-20 18:06:21.249995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fc998 00:39:21.552 [2024-11-20 18:06:21.250727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.552 [2024-11-20 18:06:21.250743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.552 [2024-11-20 18:06:21.258429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e7818 00:39:21.552 [2024-11-20 18:06:21.259174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.259191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.266866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e88f8 00:39:21.553 [2024-11-20 18:06:21.267631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.267646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.275369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ff3c8 00:39:21.553 [2024-11-20 18:06:21.276134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.276150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.283811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fe720 00:39:21.553 [2024-11-20 18:06:21.284584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.284600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.292233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f46d0 00:39:21.553 [2024-11-20 18:06:21.292983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.292998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.300652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e3d08 00:39:21.553 [2024-11-20 18:06:21.301434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.301450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.309077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e2c28 00:39:21.553 [2024-11-20 18:06:21.309832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.309849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.317526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1b48 00:39:21.553 [2024-11-20 18:06:21.318283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.318302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.325965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f2d80 00:39:21.553 [2024-11-20 
18:06:21.326729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.326745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.334409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f1ca0 00:39:21.553 [2024-11-20 18:06:21.335171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.335187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.342875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0bc0 00:39:21.553 [2024-11-20 18:06:21.343648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.343664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.351298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198efae0 00:39:21.553 [2024-11-20 18:06:21.352049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.352065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.359739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e8d30 00:39:21.553 [2024-11-20 18:06:21.360508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.360524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.368188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198df550 00:39:21.553 [2024-11-20 18:06:21.368945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.368961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.376629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0630 00:39:21.553 [2024-11-20 18:06:21.377362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.377378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.385052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fc560 
00:39:21.553 [2024-11-20 18:06:21.385812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.385828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.393483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e73e0 00:39:21.553 [2024-11-20 18:06:21.394204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.394220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.401919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e84c0 00:39:21.553 [2024-11-20 18:06:21.402684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.402700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.410367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198feb58 00:39:21.553 [2024-11-20 18:06:21.411120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.411136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.418349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f5be8 00:39:21.553 [2024-11-20 18:06:21.419085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.419101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.427853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e5658 00:39:21.553 [2024-11-20 18:06:21.428722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.428738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.436293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6738 00:39:21.553 [2024-11-20 18:06:21.437147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.437166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.444751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) 
with pdu=0x2000198de470 00:39:21.553 [2024-11-20 18:06:21.445617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.445634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.453209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e23b8 00:39:21.553 [2024-11-20 18:06:21.454064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.454080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.553 [2024-11-20 18:06:21.461649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f81e0 00:39:21.553 [2024-11-20 18:06:21.462515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.553 [2024-11-20 18:06:21.462531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.470094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f2510 00:39:21.817 [2024-11-20 18:06:21.470969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.470985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.478524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f1430 00:39:21.817 [2024-11-20 18:06:21.479365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.479381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.486949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0350 00:39:21.817 [2024-11-20 18:06:21.487830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.487846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.495382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eee38 00:39:21.817 [2024-11-20 18:06:21.496121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.496137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.503846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20c0f70) with pdu=0x2000198edd58 00:39:21.817 [2024-11-20 18:06:21.504887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.504904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.512448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ecc78 00:39:21.817 [2024-11-20 18:06:21.513328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.513345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.520872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f8a50 00:39:21.817 [2024-11-20 18:06:21.521750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.521766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.529294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6890 00:39:21.817 [2024-11-20 18:06:21.530182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.530198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.537720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f7970 00:39:21.817 [2024-11-20 18:06:21.538594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.538613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.546146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f57b0 00:39:21.817 [2024-11-20 18:06:21.547029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.547045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.554593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f35f0 00:39:21.817 [2024-11-20 18:06:21.555460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.555475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.817 [2024-11-20 18:06:21.563026] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e49b0 00:39:21.817 [2024-11-20 18:06:21.563889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.817 [2024-11-20 18:06:21.563905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.571449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e5a90 00:39:21.818 [2024-11-20 18:06:21.572314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.572330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.579881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6b70 00:39:21.818 [2024-11-20 18:06:21.580764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.580780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.588330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e27f0 00:39:21.818 [2024-11-20 18:06:21.589217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.589233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.596778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1710 00:39:21.818 [2024-11-20 18:06:21.597620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.597636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.605232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f31b8 00:39:21.818 [2024-11-20 18:06:21.606108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.606123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.613664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f20d8 00:39:21.818 [2024-11-20 18:06:21.614712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.614732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:21.818 30055.00 IOPS, 117.40 MiB/s 
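Each injected corruption shows up twice in the stretch above: a 'Data digest error on tqpair' from tcp.c as the CRC32C check on the received payload fails, and the matching host-side WRITE completion printed with status TRANSIENT TRANSPORT ERROR (00/22). Because the host was configured with --bdev-retry-count -1, those WRITEs are retried rather than failed, which is why the interval still reports 30055.00 IOPS. When reading a saved copy of this log, a quick sanity tally against the -i 256 injection budget can be done like this (the file name is illustrative):

grep -c 'Data digest error on tqpair' nvmf_digest_error.log
grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log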
[2024-11-20T17:06:21.734Z] [2024-11-20 18:06:21.622083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fb048 00:39:21.818 [2024-11-20 18:06:21.622946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.622963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.630521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ed0b0 00:39:21.818 [2024-11-20 18:06:21.631370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.631386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.638969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f96f8 00:39:21.818 [2024-11-20 18:06:21.639811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.639827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.647422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f4f40 00:39:21.818 [2024-11-20 18:06:21.648303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.648320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.655872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ef6a8 00:39:21.818 [2024-11-20 18:06:21.656750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.656766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.664332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6300 00:39:21.818 [2024-11-20 18:06:21.665192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.665208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.672752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1f80 00:39:21.818 [2024-11-20 18:06:21.673631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.673648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.681205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f2948 00:39:21.818 [2024-11-20 18:06:21.682074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.682091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.689712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0788 00:39:21.818 [2024-11-20 18:06:21.690579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.690595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.698177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ed920 00:39:21.818 [2024-11-20 18:06:21.699051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.699067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.706605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f8e88 00:39:21.818 [2024-11-20 18:06:21.707475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.707491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.715038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f7da8 00:39:21.818 [2024-11-20 18:06:21.715913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.715930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:21.818 [2024-11-20 18:06:21.723484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f57b0 00:39:21.818 [2024-11-20 18:06:21.724323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:21.818 [2024-11-20 18:06:21.724339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.731944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e49b0 00:39:22.081 [2024-11-20 18:06:21.732812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.732828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.740388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6b70 00:39:22.081 [2024-11-20 18:06:21.741274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.741290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.748833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1710 00:39:22.081 [2024-11-20 18:06:21.749675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.749691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.757263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f20d8 00:39:22.081 [2024-11-20 18:06:21.758140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.758162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.765709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fb048 00:39:22.081 [2024-11-20 18:06:21.766573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.766589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.774166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ed0b0 00:39:22.081 [2024-11-20 18:06:21.775034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.775050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.782620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f96f8 00:39:22.081 [2024-11-20 18:06:21.783505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.783522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.791061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f4f40 00:39:22.081 [2024-11-20 18:06:21.791932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.791949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.799506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ef6a8 00:39:22.081 [2024-11-20 18:06:21.800366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.800383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.807941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6300 00:39:22.081 [2024-11-20 18:06:21.808812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.808828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.816376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1f80 00:39:22.081 [2024-11-20 18:06:21.817253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.817270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.824815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f2948 00:39:22.081 [2024-11-20 18:06:21.825694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.825710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.833275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f0788 00:39:22.081 [2024-11-20 18:06:21.834164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.834180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.841719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ed920 00:39:22.081 [2024-11-20 18:06:21.842592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.842608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.850151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f8e88 00:39:22.081 [2024-11-20 18:06:21.851001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 
18:06:21.851017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.858588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f7da8 00:39:22.081 [2024-11-20 18:06:21.859439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.859455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.867040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f57b0 00:39:22.081 [2024-11-20 18:06:21.867913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.867929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.875493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e49b0 00:39:22.081 [2024-11-20 18:06:21.876378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.876394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.883945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6b70 00:39:22.081 [2024-11-20 18:06:21.884805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.081 [2024-11-20 18:06:21.884821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.081 [2024-11-20 18:06:21.892375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e1710 00:39:22.082 [2024-11-20 18:06:21.893238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.893254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.900811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f20d8 00:39:22.082 [2024-11-20 18:06:21.901666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.901683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.908686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fc128 00:39:22.082 [2024-11-20 18:06:21.909536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:22.082 [2024-11-20 18:06:21.909552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.918156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fd208 00:39:22.082 [2024-11-20 18:06:21.919163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.919179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.926611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fe2e8 00:39:22.082 [2024-11-20 18:06:21.927604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.927621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.935054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198feb58 00:39:22.082 [2024-11-20 18:06:21.936053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.936069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.943501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e9e10 00:39:22.082 [2024-11-20 18:06:21.944478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.944495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.951956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eaef0 00:39:22.082 [2024-11-20 18:06:21.952943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.952960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.960415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f9b30 00:39:22.082 [2024-11-20 18:06:21.961399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.961415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.968861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6458 00:39:22.082 [2024-11-20 18:06:21.969847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8763 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.969863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.977324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ef270 00:39:22.082 [2024-11-20 18:06:21.978262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.978282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.082 [2024-11-20 18:06:21.985760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e5658 00:39:22.082 [2024-11-20 18:06:21.986740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.082 [2024-11-20 18:06:21.986756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.342 [2024-11-20 18:06:21.994202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6738 00:39:22.342 [2024-11-20 18:06:21.995145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.342 [2024-11-20 18:06:21.995165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.342 [2024-11-20 18:06:22.002648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198de470 00:39:22.342 [2024-11-20 18:06:22.003590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.342 [2024-11-20 18:06:22.003607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.342 [2024-11-20 18:06:22.011115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e23b8 00:39:22.342 [2024-11-20 18:06:22.012105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.342 [2024-11-20 18:06:22.012121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.342 [2024-11-20 18:06:22.019561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fbcf0 00:39:22.342 [2024-11-20 18:06:22.020546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.342 [2024-11-20 18:06:22.020562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.027997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fcdd0 00:39:22.343 [2024-11-20 18:06:22.028985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21026 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.029001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.036444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e7c50 00:39:22.343 [2024-11-20 18:06:22.037435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.037451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.044876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e38d0 00:39:22.343 [2024-11-20 18:06:22.045850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.045866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.053341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e4578 00:39:22.343 [2024-11-20 18:06:22.054353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.054369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.061797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fd640 00:39:22.343 [2024-11-20 18:06:22.062779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.062795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.070247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fdeb0 00:39:22.343 [2024-11-20 18:06:22.071228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.071245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.078679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fb8b8 00:39:22.343 [2024-11-20 18:06:22.079621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.079638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.087111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ea248 00:39:22.343 [2024-11-20 18:06:22.088098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:19451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.088114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.095566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eb328 00:39:22.343 [2024-11-20 18:06:22.096561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.096577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.104008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f9f68 00:39:22.343 [2024-11-20 18:06:22.105005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.105022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.112465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6020 00:39:22.343 [2024-11-20 18:06:22.113463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.113480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.120902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ef6a8 00:39:22.343 [2024-11-20 18:06:22.121892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.121908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.129342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e5220 00:39:22.343 [2024-11-20 18:06:22.130326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.130342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.137784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6300 00:39:22.343 [2024-11-20 18:06:22.138777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.138794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.146252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198de038 00:39:22.343 [2024-11-20 18:06:22.147238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.147255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.154711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0a68 00:39:22.343 [2024-11-20 18:06:22.155695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.155712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.163166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fc998 00:39:22.343 [2024-11-20 18:06:22.164155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.164174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.171597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e7818 00:39:22.343 [2024-11-20 18:06:22.172594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.172611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.180022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e88f8 00:39:22.343 [2024-11-20 18:06:22.180992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.181009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.188475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e4140 00:39:22.343 [2024-11-20 18:06:22.189468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.189484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.196938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fd208 00:39:22.343 [2024-11-20 18:06:22.197919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.197938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.205393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fe2e8 00:39:22.343 [2024-11-20 
18:06:22.206391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.206407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.213825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198feb58 00:39:22.343 [2024-11-20 18:06:22.214807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.214824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.222279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e9e10 00:39:22.343 [2024-11-20 18:06:22.223234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.223250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.230719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eaef0 00:39:22.343 [2024-11-20 18:06:22.231703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.231718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.239194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f9b30 00:39:22.343 [2024-11-20 18:06:22.240174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.343 [2024-11-20 18:06:22.240190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.343 [2024-11-20 18:06:22.247640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6458 00:39:22.344 [2024-11-20 18:06:22.248637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.344 [2024-11-20 18:06:22.248653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.256086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ef270 00:39:22.606 [2024-11-20 18:06:22.257087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.257105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.264533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e5658 
00:39:22.606 [2024-11-20 18:06:22.265524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.265540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.272958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6738 00:39:22.606 [2024-11-20 18:06:22.273942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.273958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.281417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198de470 00:39:22.606 [2024-11-20 18:06:22.282361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.282378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.289876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e23b8 00:39:22.606 [2024-11-20 18:06:22.290838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.290855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.298331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fbcf0 00:39:22.606 [2024-11-20 18:06:22.299319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.299335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.306754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fcdd0 00:39:22.606 [2024-11-20 18:06:22.307740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.307757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.315209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e7c50 00:39:22.606 [2024-11-20 18:06:22.316204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.316221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.323641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with 
pdu=0x2000198e38d0 00:39:22.606 [2024-11-20 18:06:22.324634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.324650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.332098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e4578 00:39:22.606 [2024-11-20 18:06:22.333039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.333055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.340544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fd640 00:39:22.606 [2024-11-20 18:06:22.341532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.341549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.348999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fdeb0 00:39:22.606 [2024-11-20 18:06:22.349976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.349993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.357433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fb8b8 00:39:22.606 [2024-11-20 18:06:22.358412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.358429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.365858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ea248 00:39:22.606 [2024-11-20 18:06:22.366817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.366833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.374320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eb328 00:39:22.606 [2024-11-20 18:06:22.375313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.375329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.382785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20c0f70) with pdu=0x2000198f9f68 00:39:22.606 [2024-11-20 18:06:22.383766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.383783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.391224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6020 00:39:22.606 [2024-11-20 18:06:22.392196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.392212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.399648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ef6a8 00:39:22.606 [2024-11-20 18:06:22.400637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.606 [2024-11-20 18:06:22.400653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.606 [2024-11-20 18:06:22.408073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e5220 00:39:22.606 [2024-11-20 18:06:22.409062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.409078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.416510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e6300 00:39:22.607 [2024-11-20 18:06:22.417457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.417473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.425026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198de038 00:39:22.607 [2024-11-20 18:06:22.425996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.426012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.433489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0a68 00:39:22.607 [2024-11-20 18:06:22.434461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.434477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.441923] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fc998 00:39:22.607 [2024-11-20 18:06:22.442862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.442878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.450349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e7818 00:39:22.607 [2024-11-20 18:06:22.451341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.451356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.458774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e88f8 00:39:22.607 [2024-11-20 18:06:22.459757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.459772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.467213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e4140 00:39:22.607 [2024-11-20 18:06:22.468150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.468169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.475662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fd208 00:39:22.607 [2024-11-20 18:06:22.476656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.476672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.484097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fe2e8 00:39:22.607 [2024-11-20 18:06:22.485099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.485115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.492522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198feb58 00:39:22.607 [2024-11-20 18:06:22.493505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.493524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.500942] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e9e10 00:39:22.607 [2024-11-20 18:06:22.501941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.501957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.607 [2024-11-20 18:06:22.509524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198eaef0 00:39:22.607 [2024-11-20 18:06:22.510506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.607 [2024-11-20 18:06:22.510522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.517976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f9b30 00:39:22.867 [2024-11-20 18:06:22.518959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.518975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.526723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f92c0 00:39:22.867 [2024-11-20 18:06:22.527789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.527805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.535308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f7100 00:39:22.867 [2024-11-20 18:06:22.536399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.536415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.543732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f5be8 00:39:22.867 [2024-11-20 18:06:22.544789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.544805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.552154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198fa7d8 00:39:22.867 [2024-11-20 18:06:22.553223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.553240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 
[2024-11-20 18:06:22.560603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f7da8 00:39:22.867 [2024-11-20 18:06:22.561693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.561710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.569053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f6cc8 00:39:22.867 [2024-11-20 18:06:22.570170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.570186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.577494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198f8e88 00:39:22.867 [2024-11-20 18:06:22.578588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.578604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.585924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198ec840 00:39:22.867 [2024-11-20 18:06:22.587019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.587035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.594350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198e0ea0 00:39:22.867 [2024-11-20 18:06:22.595417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.595433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.602789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198de8a8 00:39:22.867 [2024-11-20 18:06:22.603892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.603909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:22.867 [2024-11-20 18:06:22.611231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c0f70) with pdu=0x2000198df988 00:39:22.867 [2024-11-20 18:06:22.612335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:22.867 [2024-11-20 18:06:22.612351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 
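A note on what this burst of errors is exercising: in NVMe/TCP, a data PDU can carry a 4-byte data digest (DDGST), a CRC32C computed over the PDU's data section. This test stage (nvmf_digest_error) sends digests that do not match the payload, so data_crc32_calc_done in tcp.c recomputes the CRC32C, sees a mismatch, and the command completes with the retryable TRANSIENT TRANSPORT ERROR (00/22) status counted below. The following is a minimal illustrative sketch of that check in shell -- crc32c and verify_ddgst are hypothetical names, and this is not SPDK's implementation (SPDK does this in C, usually with hardware-accelerated CRC32C):

```bash
#!/usr/bin/env bash
# CRC32C in reflected form: polynomial 0x82F63B78, init and final XOR of
# 0xFFFFFFFF. This is the digest NVMe/TCP uses for HDGST/DDGST.
crc32c() {
    local data=$1
    local -i crc=0xFFFFFFFF
    local byte i bit
    for ((i = 0; i < ${#data}; i++)); do
        printf -v byte '%d' "'${data:i:1}"    # character -> byte value
        (( crc ^= byte & 0xFF ))
        for ((bit = 0; bit < 8; bit++)); do
            if (( crc & 1 )); then
                (( crc = (crc >> 1) ^ 0x82F63B78 ))
            else
                (( crc >>= 1 ))
            fi
        done
    done
    printf '0x%08x\n' $(( crc ^ 0xFFFFFFFF ))
}

# Hypothetical mirror of the data_crc32_calc_done check: recompute the
# digest over the received data and compare it with the DDGST field the
# peer sent. A mismatch is what produces the errors logged above.
verify_ddgst() {
    local data=$1 received_ddgst=$2
    [[ $(crc32c "$data") == "$received_ddgst" ]]
}

crc32c "123456789"    # standard CRC32C test vector, prints 0xe3069283
```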
dnr:0
00:39:22.867 30152.00 IOPS, 117.78 MiB/s
00:39:22.867 Latency(us)
00:39:22.867 [2024-11-20T17:06:22.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:22.867 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:22.867 nvme0n1 : 2.00 30162.40 117.82 0.00 0.00 4238.51 2061.65 11359.57
00:39:22.867 [2024-11-20T17:06:22.783Z] ===================================================================================================================
00:39:22.867 [2024-11-20T17:06:22.783Z] Total : 30162.40 117.82 0.00 0.00 4238.51 2061.65 11359.57
00:39:22.867 {
00:39:22.867   "results": [
00:39:22.867     {
00:39:22.867       "job": "nvme0n1",
00:39:22.867       "core_mask": "0x2",
00:39:22.867       "workload": "randwrite",
00:39:22.867       "status": "finished",
00:39:22.867       "queue_depth": 128,
00:39:22.867       "io_size": 4096,
00:39:22.867       "runtime": 2.003554,
00:39:22.867       "iops": 30162.40141268965,
00:39:22.867       "mibps": 117.82188051831895,
00:39:22.867       "io_failed": 0,
00:39:22.867       "io_timeout": 0,
00:39:22.867       "avg_latency_us": 4238.508135204306,
00:39:22.867       "min_latency_us": 2061.653333333333,
00:39:22.867       "max_latency_us": 11359.573333333334
00:39:22.867     }
00:39:22.867   ],
00:39:22.867   "core_count": 1
00:39:22.867 }
00:39:22.868 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:22.868 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:39:22.868 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:22.868 | .driver_specific
00:39:22.868 | .nvme_error
00:39:22.868 | .status_code
00:39:22.868 | .command_transient_transport_error'
00:39:22.868 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2930473
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2930473 ']'
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2930473
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2930473
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2930473'
00:39:23.127 killing process with pid 2930473
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2930473
00:39:23.127 Received shutdown signal, test time was about 2.000000 seconds
00:39:23.127
00:39:23.127 Latency(us)
00:39:23.127 [2024-11-20T17:06:23.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:23.127 [2024-11-20T17:06:23.043Z] ===================================================================================================================
00:39:23.127 [2024-11-20T17:06:23.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:23.127 18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2930473
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2931125
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2931125 /var/tmp/bperf.sock
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2931125 ']'
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
18:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
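The pass/fail decision for the run summarized above hinges on a single counter. With --nvme-error-stat set, the host-side bdev layer tallies NVMe error completions by status code, and get_transient_errcount reads that tally back over the bdevperf RPC socket; the pass succeeds if the count is non-zero while io_failed stays 0, since the bdev retry count of -1 lets every corrupted write eventually land. A minimal standalone sketch of that check, reusing the socket path, bdev name, and jq filter from the trace (the script framing itself is an assumption; the test reaches the same calls through its helper functions):

# Sketch of get_transient_errcount as traced above: query bdev iostat over
# the bdevperf RPC socket, then pull out the transient transport error tally.
# SPDK checkout path, socket path, and bdev name are taken from this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')
# The 4096-byte randwrite pass above tallied 236 of these; any non-zero
# count proves the injected digest corruptions were detected and reported.
(( errcount > 0 ))

Here that check evaluated (( 236 > 0 )), so the first pass succeeded, its bdevperf process was killed, and the harness launched a fresh bdevperf for the next configuration, as traced above.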
00:39:23.387 [2024-11-20 18:06:23.042030] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:39:23.387 [2024-11-20 18:06:23.042088] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931125 ]
00:39:23.387 [2024-11-20 18:06:23.118069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:23.387 [2024-11-20 18:06:23.146068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:39:23.387 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:23.387 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:39:23.387 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:23.387 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:23.646 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:39:23.647 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:23.647 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:23.647 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:23.647 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:23.647 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:23.907 nvme0n1
00:39:23.907 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:39:23.907 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:23.907 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:23.907 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:23.907 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:39:23.907 18:06:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:24.170 I/O size of 131072 is greater than zero copy threshold (65536).
00:39:24.170 Zero copy mechanism will not be used.
00:39:24.170 Running I/O for 2 seconds...
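The second pass just configured above exercises the same digest-error path with 128 KiB random writes at queue depth 16. Stripped of the xtrace wrappers, the sequence is: enable per-status-code NVMe error accounting with retries that never give up, clear any leftover crc32c fault injection, attach the NVMe-oF TCP controller with the data digest (--ddgst) enabled, arm corruption of every 32nd crc32c operation, and start the run. A condensed sketch using plain rpc.py calls; the target-side socket path is an assumption, since the trace reaches it through rpc_cmd without showing an address:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
TGT=/var/tmp/spdk.sock   # assumed RPC socket of the nvmf target app

# Tally error completions per NVMe status code; retry count -1 means a
# transient failure is recorded but the I/O is retried until it succeeds.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start clean: no crc32c faults left over from the previous pass.
"$RPC" -s "$TGT" accel_error_inject_error -o crc32c -t disable
# Attach the controller with the TCP data digest (DDGST) enabled.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the result of every 32nd crc32c operation so data digests miscompare.
"$RPC" -s "$TGT" accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the configured 2-second workload.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each miscompared digest then surfaces below as a tcp.c data digest error paired with a completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), that is, status code type 0x0 with status code 0x22, the NVMe transient transport error status this test counts.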
00:39:24.170 [2024-11-20 18:06:23.852515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.852838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.852864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.859089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.859291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.859317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.863641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.863834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.863852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.869562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.869754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.869771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.872935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.873125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.873142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.876678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.876867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.876884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.880479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.880668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.880686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.884339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.884528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.884545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.888446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.888633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.888649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.892310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.892496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.892513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.895924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.896119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.899568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.899754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.899770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.903763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.903952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.903969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.907565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.907695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.907711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.912773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.912960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.912979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.917854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.918042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.918059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.922933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.923248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.923266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.930894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.931204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.931222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.938374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.938701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.938718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.945035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.945341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.945358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.952315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.952506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.952524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.955974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.956168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.956185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.959843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.960030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.960046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.170 [2024-11-20 18:06:23.963580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.170 [2024-11-20 18:06:23.963636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.170 [2024-11-20 18:06:23.963651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:23.967203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:23.967392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:23.967408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:23.971010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:23.971205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:23.971221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:23.974892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:23.975092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:23.975109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:23.981518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:23.981811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 
[2024-11-20 18:06:23.981834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:23.986394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:23.986712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:23.986730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:23.994811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:23.995104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:23.995122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.001733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.001924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.001941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.008182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.008373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.008390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.011893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.012091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.012108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.016270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.016470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.016486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.023759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.023977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.023994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.030555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.030745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.030761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.035483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.035677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.035695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.041659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.041876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.041893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.048910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.049210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.049228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.056644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.056840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.056857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.062245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.062434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.062451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.068854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.069185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.069203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.171 [2024-11-20 18:06:24.076588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.171 [2024-11-20 18:06:24.076790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.171 [2024-11-20 18:06:24.076807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.081962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.082155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.082178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.088243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.088565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.088583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.092167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.092355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.092373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.096049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.096242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.096259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.099657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.099702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.099718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.105911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.106224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.106241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.110269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.110464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.110480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.114113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.114315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.114332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.117967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.431 [2024-11-20 18:06:24.118167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.431 [2024-11-20 18:06:24.118185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.431 [2024-11-20 18:06:24.122092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.122288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.122304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.125605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.125791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.125811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.132129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.132414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.132432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.140486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 
[2024-11-20 18:06:24.140779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.140797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.145502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.145790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.145807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.150034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.150354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.150372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.154203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.154499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.154517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.160496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.160801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.160819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.166593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.166897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.166914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.170340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.170538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.170555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.173994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.174188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.174205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.181117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.181311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.181335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.185346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.185535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.185552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.192366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.192553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.192570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.202144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.202465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.202483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.212647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.213000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.213017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.223307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.223560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.223584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.234460] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.234676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.234693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.245448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.245789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.245809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.255837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.256075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.256091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.266280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.266486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.266503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.275971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.276299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.276316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.283998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.284298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.284315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.292310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.292365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.292381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:39:24.432 [2024-11-20 18:06:24.300354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.300543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.300559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.308444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.308749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.308766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.314209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.314497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.314515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.322074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.322295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.322312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.332085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.432 [2024-11-20 18:06:24.332326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.432 [2024-11-20 18:06:24.332342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.432 [2024-11-20 18:06:24.343758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.343975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.343992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.693 [2024-11-20 18:06:24.355055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.355264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.355281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.693 [2024-11-20 18:06:24.365289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.365614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.365631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.693 [2024-11-20 18:06:24.370882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.371254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.371272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.693 [2024-11-20 18:06:24.378733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.379095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.379112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:24.693 [2024-11-20 18:06:24.383811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.384001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.384017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:24.693 [2024-11-20 18:06:24.392187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.392609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.392628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:24.693 [2024-11-20 18:06:24.399741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.399939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.399955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:24.693 [2024-11-20 18:06:24.405757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:24.693 [2024-11-20 18:06:24.405956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:24.693 [2024-11-20 18:06:24.405973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:39:24.693 [2024-11-20 18:06:24.411840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90
00:39:24.693 [2024-11-20 18:06:24.412028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:24.693 [2024-11-20 18:06:24.412045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:39:24.693 [... the same data_crc32_calc_done / print_command / print_completion triplet repeats for qid:1 cid:15 WRITEs from 18:06:24.422 to 18:06:24.644 (lba 14432, 4160, 12448, 5696, 18848, 16384, 2080, 10816, 24640, 11264, 4992, 8480, 18112, 9984, 24736, 14368, 20128, 19744, 22400, 5856, 22048, 8384, 9792, 8864, 18112, 19072 ...), sqhd cycling 0001/0021/0041/0061 ...]
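The repeated data_crc32_calc_done errors above are NVMe/TCP data digest (DDGST) failures: with data digest negotiated on the queue pair, the receiver recomputes CRC-32C over each PDU's data section and compares it with the digest carried in the PDU. As a minimal sketch of that check (not SPDK's implementation, which uses table-driven/accelerated CRC in its util library; the helper names here are illustrative):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value.
assert crc32c(b"123456789") == 0xE3069283

def data_digest_ok(pdu_data: bytes, received_ddgst: int) -> bool:
    # The comparison behind data_crc32_calc_done: a mismatch is what the
    # log reports as "Data digest error on tqpair=(...)".
    return crc32c(pdu_data) == received_ddgst
```

A mismatch on every single WRITE, as seen here, is consistent with deliberate digest corruption by the test harness rather than sporadic line errors; each failure is surfaced to the host as the transient transport error printed in the completion lines.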
00:39:24.693 [... triplets continue for qid:1 cid:15 (lba 5216, 18176, 20256); from [2024-11-20 18:06:24.665695] onward the failing WRITEs run on qid:1 cid:0 (lba 10464, 4160, 10592, 1120, 14368, 9632, 2336, 17760, 736, 16992, 4064, 18144, 6496, 19072, 10432, 16416, 544, 2048, 12672, 10688, 10624, 4800, 17760, 7968, 25152 ...), each completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
00:39:24.956 4514.00 IOPS, 564.25 MiB/s [2024-11-20T17:06:24.872Z]
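The interleaved throughput sample above is self-consistent with the surrounding entries: every WRITE is len:32 blocks, so assuming a 4 KiB logical block size (the namespace's LBA format is not shown in the log), each I/O is 128 KiB:

```python
iops = 4514.00                  # sampled IOPS from the log line above
blocks_per_io = 32              # "len:32" in each print_command entry
block_bytes = 4096              # assumed 4 KiB LBA format (not shown in the log)
mib_per_s = iops * blocks_per_io * block_bytes / 2**20
assert mib_per_s == 564.25      # matches the sampled 564.25 MiB/s exactly
```

The exact match (4514 / 8 = 564.25) supports the 4 KiB-block assumption.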
00:39:24.956 [... digest-error triplets continue on qid:1 cid:0 from 18:06:24.847 to 18:06:25.157 (lba 12448, 1504, 9376, 128, 22944, 2656, 19872, 8896, 19392, 3200, 1760, 3456, 736, 3968, 2912, 20256, 8448, 17600, 14304, 8160, 18944, 9216, 11200, 20128, 1984, 16960, 24448, 16256, 20000, 19296, 14912, 16672, 12128, 9376, 22752, 19392, 22336, 8512, 1632, 24032, 14240, 18176 ...), every completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
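Every completion in this stretch prints COMMAND TRANSIENT TRANSPORT ERROR (00/22). In SPDK's (SCT/SC) notation that is Status Code Type 0x0 (Generic Command Status) with Status Code 0x22, the Transient Transport Error added in NVMe 1.4, and dnr:0 means the do-not-retry bit is clear, so the host is allowed to retry. A small, hypothetical decoder for the fields these lines print:

```python
import re

# Hypothetical decoder for the "(SCT/SC)" pair and trailing fields in
# spdk_nvme_print_completion output; the status table is deliberately minimal.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x22: "COMMAND TRANSIENT TRANSPORT ERROR",  # NVMe 1.4+ generic status 22h
}

def decode_completion(line: str):
    m = re.search(r"\((\w{2})/(\w{2})\) .*? sqhd:(\w{4}) .*? dnr:(\d)", line)
    if not m:
        return None
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    name = GENERIC_STATUS.get(sc, "?") if sct == 0 else "?"
    return {"sct": sct, "sc": sc, "status": name,
            "sqhd": int(m.group(3), 16), "dnr": bool(int(m.group(4)))}

line = ("COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 "
        "sqhd:0041 p:0 m:0 dnr:0")
print(decode_completion(line))
# {'sct': 0, 'sc': 34, 'status': 'COMMAND TRANSIENT TRANSPORT ERROR',
#  'sqhd': 65, 'dnr': False}
```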
00:39:25.481 [... triplets continue on qid:1 cid:0 from 18:06:25.167 to 18:06:25.390 (lba 12224, 3104, 14464, 1312, 4000, 5248, 17408, 13600, 22368, 7168, 17856, 7808, 20928, 8352, 22464, 2688, 22400, 7904, 23040, 23744, 24544, 1376, 11904, 4768, 7040, 18528, 3392 ...) ...]
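At this density the individual triplets stop being informative; reducing the stream to digest-error counts per second and per tqpair is usually more useful. A throwaway parser over the raw console output (hypothetical helper, matched to the line formats visible above):

```python
import re
from collections import Counter

ERR = re.compile(r"\[([\d\- :.]+)\] tcp\.c:\d+:data_crc32_calc_done: "
                 r"\*ERROR\*: Data digest error on tqpair=\((0x[0-9a-f]+)\)")

def digest_error_rate(log_lines):
    """Count data-digest errors per (second, tqpair) from SPDK console output."""
    counts = Counter()
    for line in log_lines:
        m = ERR.search(line)
        if m:
            second = m.group(1).split(".")[0]   # trim the sub-second part
            counts[(second, m.group(2))] += 1
    return counts

sample = ["[2024-11-20 18:06:25.135678] tcp.c:2233:data_crc32_calc_done: "
          "*ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90"]
print(digest_error_rate(sample))
# Counter({('2024-11-20 18:06:25', '0x20c1450'): 1})
```

Run over this section, such a tally would show on the order of a hundred digest errors per second, all on the single queue pair 0x20c1450.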
nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.398053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.405628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.405851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.405868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.414713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.414851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.414867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.421740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.421785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.421801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.429960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.430023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.430039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.439198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.439479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.439494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.449091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.449385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.449401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.459221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.459566] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.459581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.469657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.469957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.469973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.480025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.480347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.480364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.490745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.490945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.490960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.501678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.502031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.502047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.512476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.512760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.512780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.521499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.521559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.521574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.529577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.529806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.529822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.532972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.533028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.533044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.535673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.535725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.535741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.538398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.538461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.538477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.541066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.541122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.541138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.543708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.743 [2024-11-20 18:06:25.543757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.743 [2024-11-20 18:06:25.543773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.743 [2024-11-20 18:06:25.546458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.546529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.546545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.549128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 
18:06:25.549184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.549200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.551775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.551829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.551845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.554251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.554315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.554330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.556721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.556775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.556791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.559199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.559247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.559263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.561672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.561727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.561743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.564175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.564241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.564256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.567300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with 
pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.567359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.567374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.572203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.572253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.572269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.578454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.578512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.578528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.584096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.584153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.584174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.587996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.588049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.588064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.590795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.590861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.590877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.593649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.593720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.593735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.596407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.596459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.596475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.599248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.599292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.599307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.604605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.604679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.604694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.607513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.607562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.607581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.610189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.610272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.610288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.614128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.614374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.614390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.624611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.624848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.624865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.634332] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.744 [2024-11-20 18:06:25.634563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.744 [2024-11-20 18:06:25.634580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:25.744 [2024-11-20 18:06:25.645094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:25.745 [2024-11-20 18:06:25.645289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:25.745 [2024-11-20 18:06:25.645305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:25.745 [2024-11-20 18:06:25.654710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.006 [2024-11-20 18:06:25.655037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.006 [2024-11-20 18:06:25.655055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:26.006 [2024-11-20 18:06:25.663509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.006 [2024-11-20 18:06:25.663568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.006 [2024-11-20 18:06:25.663583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:26.006 [2024-11-20 18:06:25.672419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.006 [2024-11-20 18:06:25.672645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.006 [2024-11-20 18:06:25.672660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:26.006 [2024-11-20 18:06:25.679830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.006 [2024-11-20 18:06:25.679887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.006 [2024-11-20 18:06:25.679903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:26.006 [2024-11-20 18:06:25.682844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.682903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.682919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:26.007 
[2024-11-20 18:06:25.685485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.685536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.685552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.688132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.688192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.688207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.690783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.690838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.690854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.693310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.693354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.693369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.695854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.695909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.695924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.698440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.698484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.698500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.702227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.702311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.702327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.707594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.707684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.707700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.710136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.710219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.710235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.712604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.712683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.712699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.715150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.715239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.715254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.717845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.717914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.717929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.720364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.720445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.720461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.723190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.723263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.723279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.728652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.728731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.728747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.735936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.736232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.736252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.744248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.744549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.744566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.751237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.751510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.751527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.760539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.760628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.760643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.769268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.769507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.007 [2024-11-20 18:06:25.769523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:26.007 [2024-11-20 18:06:25.777259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.007 [2024-11-20 18:06:25.777331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.008 [2024-11-20 18:06:25.777346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:26.008 [2024-11-20 18:06:25.786417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.008 [2024-11-20 18:06:25.786476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.008 [2024-11-20 18:06:25.786491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:26.008 [2024-11-20 18:06:25.794111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.008 [2024-11-20 18:06:25.794183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.008 [2024-11-20 18:06:25.794199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:26.008 [2024-11-20 18:06:25.802717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.008 [2024-11-20 18:06:25.802992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.008 [2024-11-20 18:06:25.803009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:26.008 [2024-11-20 18:06:25.811007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.008 [2024-11-20 18:06:25.811064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.008 [2024-11-20 18:06:25.811079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:26.008 [2024-11-20 18:06:25.818894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.008 [2024-11-20 18:06:25.818966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.008 [2024-11-20 18:06:25.818981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:26.008 [2024-11-20 18:06:25.825217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.008 [2024-11-20 18:06:25.825374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.008 [2024-11-20 18:06:25.825390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:26.008 [2024-11-20 18:06:25.833256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90 00:39:26.008 [2024-11-20 18:06:25.833320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:26.008 [2024-11-20 18:06:25.833335] nvme_qpair.c: 474:spdk_nvme_print_completion: 
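Each triplet above is one deliberately injected fault: bdevperf issues 131072-byte writes with data digest enabled, the tcp.c CRC32C check on the received PDU fails, and the initiator reports the I/O as a transient transport error (status 00/22) rather than as data loss. A quick, hedged way to tally the injections from a saved copy of this console output (bperf.log is a hypothetical file name; the patterns are copied from the entries above):

  # count injected data-digest faults recorded in a saved log
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
  # and the matching transient-transport-error completions seen by the initiator
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log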
00:39:26.008 [2024-11-20 18:06:25.839107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90
00:39:26.008 [2024-11-20 18:06:25.839387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:26.008 [2024-11-20 18:06:25.839403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:26.008 4537.00 IOPS, 567.12 MiB/s [2024-11-20T17:06:25.924Z] [2024-11-20 18:06:25.848605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20c1450) with pdu=0x2000198fef90
00:39:26.008 [2024-11-20 18:06:25.848888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:26.008 [2024-11-20 18:06:25.848903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:39:26.008
00:39:26.008 Latency(us)
00:39:26.008 [2024-11-20T17:06:25.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:26.008 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:39:26.008 nvme0n1 : 2.01 4533.62 566.70 0.00 0.00 3523.13 1092.27 11414.19
00:39:26.008 [2024-11-20T17:06:25.924Z] ===================================================================================================================
00:39:26.008 [2024-11-20T17:06:25.924Z] Total : 4533.62 566.70 0.00 0.00 3523.13 1092.27 11414.19
00:39:26.008 {
00:39:26.008 "results": [
00:39:26.008 {
00:39:26.008 "job": "nvme0n1",
00:39:26.008 "core_mask": "0x2",
00:39:26.008 "workload": "randwrite",
00:39:26.008 "status": "finished",
00:39:26.008 "queue_depth": 16,
00:39:26.008 "io_size": 131072,
00:39:26.008 "runtime": 2.00502,
00:39:26.008 "iops": 4533.620612263219,
00:39:26.008 "mibps": 566.7025765329024,
00:39:26.008 "io_failed": 0,
00:39:26.008 "io_timeout": 0,
00:39:26.008 "avg_latency_us": 3523.1318929226254,
00:39:26.008 "min_latency_us": 1092.2666666666667,
00:39:26.008 "max_latency_us": 11414.186666666666
00:39:26.008 }
00:39:26.008 ],
00:39:26.008 "core_count": 1
00:39:26.008 }
00:39:26.008 18:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:26.008 18:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:39:26.008 18:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:26.008 | .driver_specific
00:39:26.008 | .nvme_error
00:39:26.008 | .status_code
00:39:26.008 | .command_transient_transport_error'
00:39:26.008 18:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
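The @71 arithmetic check that follows compares the extracted counter (293 here) against zero. As the trace above shows, get_transient_errcount asks the bdevperf app, over its private RPC socket, for the bdev's iostat and pulls out the NVMe transient-transport-error counter with jq. A condensed sketch of the same pipeline (paths as in this run; the helper name is local to host/digest.sh):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  get_transient_errcount() {
      # bperf_rpc wraps rpc.py with the bdevperf socket; jq digs out the counter
      "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }
  (( $(get_transient_errcount nvme0n1) > 0 ))   # the test asserts at least one error was seen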
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 293 > 0 ))
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2931125
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2931125 ']'
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2931125
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2931125
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2931125'
00:39:26.268 killing process with pid 2931125
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2931125
00:39:26.268 Received shutdown signal, test time was about 2.000000 seconds
00:39:26.268
00:39:26.268 Latency(us)
00:39:26.268 [2024-11-20T17:06:26.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:26.268 [2024-11-20T17:06:26.184Z] ===================================================================================================================
00:39:26.268 [2024-11-20T17:06:26.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:26.268 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2931125
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2929095
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2929095 ']'
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2929095
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2929095
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2929095'
00:39:26.528 killing process with pid 2929095
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2929095
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2929095
00:39:26.528
00:39:26.528 real 0m14.048s
00:39:26.528 user 0m27.266s
00:39:26.528 sys 0m3.471s
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:26.528 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:26.528 ************************************
00:39:26.528 END TEST nvmf_digest_error
00:39:26.528 ************************************
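Both teardown paths above funnel through the same killprocess helper, and the traced line numbers (@950 through @974) outline its shape. A minimal sketch reconstructed from those steps (guard clauses simplified; the real helper in autotest_common.sh carries more edge cases):

  killprocess() {
      [ -z "$1" ] && return 1                             # @950: need a pid
      kill -0 "$1" || return 0                            # @954: already gone?
      local process_name
      if [ "$(uname)" = Linux ]; then                     # @955
          process_name=$(ps --no-headers -o comm= "$1")   # @956: e.g. reactor_0/reactor_1
      fi
      [ "$process_name" = sudo ] && return 1              # @960: never kill the sudo wrapper
      echo "killing process with pid $1"                  # @968
      kill "$1"                                           # @969
      wait "$1"                                           # @974: reap it and collect status
  }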
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:26.789 rmmod nvme_tcp
00:39:26.789 rmmod nvme_fabrics
00:39:26.789 rmmod nvme_keyring
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 2929095 ']'
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 2929095
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2929095 ']'
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2929095
00:39:26.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2929095) - No such process
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2929095 is not found'
00:39:26.789 Process with pid 2929095 is not found
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:26.789 18:06:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:28.696 18:06:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:39:28.696
00:39:28.696 real 0m38.871s
00:39:28.696 user 0m58.345s
00:39:28.696 sys 0m12.863s
00:39:28.696 18:06:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:28.696 18:06:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:39:28.696 ************************************
00:39:28.696 END TEST nvmf_digest
00:39:28.696 ************************************
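The nvmftestfini trace above splits into two halves: nvmfcleanup unloads the kernel initiator stack, with the @125 for i in {1..20} / @124 set +e pair turning modprobe -r into a retry loop (the modules can stay busy for a moment after the test exits), and nvmf_tcp_fini scrubs the SPDK iptables rules and the test namespace. A rough sketch of that flow under those assumptions, following the traced nvmf/common.sh line numbers (the retry interval is an assumption, not visible in the trace):

  nvmfcleanup() {
      sync                                     # @121
      set +e                                   # @124: tolerate busy modules
      for i in {1..20}; do                     # @125
          modprobe -v -r nvme-tcp && break     # @126: retried until references drain
          sleep 1                              # interval assumed; not shown in the trace
      done
      modprobe -v -r nvme-fabrics              # @127
      set -e                                   # @128
  }
  iptr() {
      # @787: drop only the SPDK_NVMF test rules, keep the rest of the ruleset intact
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }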
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:39:28.957 ************************************
00:39:28.957 START TEST nvmf_bdevperf
00:39:28.957 ************************************
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:39:28.957 * Looking for test storage...
00:39:28.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
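(The cmp_versions trace continues below, walking both version arrays element by element; the decimal calls merely validate that each field is numeric.) Condensing the traced scripts/common.sh steps into one place, lt 1.15 2 amounts to the following sketch (only the '<' branch exercised here is shown; the real helper handles the other operators as well):

  cmp_versions() {                                   # called as: cmp_versions 1.15 '<' 2
      local ver1 ver2 op=$2
      IFS=.-: read -ra ver1 <<< "$1"                 # "1.15" -> (1 15), so ver1_l=2
      IFS=.-: read -ra ver2 <<< "$3"                 # "2"    -> (2),    so ver2_l=1
      local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          (( ver1[v] > ver2[v] )) && return 1        # @367: left side is newer
          (( ver1[v] < ver2[v] )) && return 0        # @368: here 1 < 2 decides it at v=0
      done
      return 1                                       # all fields equal: not strictly less
  }
  lt() { cmp_versions "$1" '<' "$2"; }               # @373 wrapper, as traced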
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:39:28.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:28.957 --rc genhtml_branch_coverage=1
00:39:28.957 --rc genhtml_function_coverage=1
00:39:28.957 --rc genhtml_legend=1
00:39:28.957 --rc geninfo_all_blocks=1
00:39:28.957 --rc geninfo_unexecuted_blocks=1
00:39:28.957
00:39:28.957 '
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:39:28.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:28.957 --rc genhtml_branch_coverage=1
00:39:28.957 --rc genhtml_function_coverage=1
00:39:28.957 --rc genhtml_legend=1
00:39:28.957 --rc geninfo_all_blocks=1
00:39:28.957 --rc geninfo_unexecuted_blocks=1
00:39:28.957
00:39:28.957 '
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:39:28.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:28.957 --rc genhtml_branch_coverage=1
00:39:28.957 --rc genhtml_function_coverage=1
00:39:28.957 --rc genhtml_legend=1
00:39:28.957 --rc geninfo_all_blocks=1
00:39:28.957 --rc geninfo_unexecuted_blocks=1
00:39:28.957
00:39:28.957 '
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:39:28.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:28.957 --rc genhtml_branch_coverage=1
00:39:28.957 --rc genhtml_function_coverage=1
00:39:28.957 --rc genhtml_legend=1
00:39:28.957 --rc geninfo_all_blocks=1
00:39:28.957 --rc geninfo_unexecuted_blocks=1
00:39:28.957
00:39:28.957 '
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:28.957 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.223 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:29.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:39:29.224 18:06:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:35.898 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:35.898 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:36.160 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:36.160 
18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:36.160 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:36.160 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:36.160 18:06:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:36.160 18:06:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:36.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:36.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:39:36.422 00:39:36.422 --- 10.0.0.2 ping statistics --- 00:39:36.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.422 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:36.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:36.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:39:36.422 00:39:36.422 --- 10.0.0.1 ping statistics --- 00:39:36.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.422 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=2935802 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 2935802 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2935802 ']' 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:36.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:36.422 18:06:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:36.422 [2024-11-20 18:06:36.232824] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:39:36.422 [2024-11-20 18:06:36.232886] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:36.422 [2024-11-20 18:06:36.321449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:36.683 [2024-11-20 18:06:36.370663] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:36.683 [2024-11-20 18:06:36.370722] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:36.683 [2024-11-20 18:06:36.370731] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:36.683 [2024-11-20 18:06:36.370738] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:36.683 [2024-11-20 18:06:36.370744] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:36.683 [2024-11-20 18:06:36.370906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:36.683 [2024-11-20 18:06:36.371062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:36.683 [2024-11-20 18:06:36.371062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:37.255 [2024-11-20 18:06:37.076321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:37.255 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:37.256 Malloc0 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:37.256 [2024-11-20 18:06:37.142037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:37.256 { 00:39:37.256 "params": { 00:39:37.256 "name": "Nvme$subsystem", 00:39:37.256 "trtype": "$TEST_TRANSPORT", 00:39:37.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:37.256 "adrfam": "ipv4", 00:39:37.256 "trsvcid": "$NVMF_PORT", 00:39:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:37.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:37.256 "hdgst": ${hdgst:-false}, 00:39:37.256 "ddgst": ${ddgst:-false} 00:39:37.256 }, 00:39:37.256 "method": "bdev_nvme_attach_controller" 00:39:37.256 } 00:39:37.256 EOF 00:39:37.256 )") 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:39:37.256 18:06:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:37.256 "params": { 00:39:37.256 "name": "Nvme1", 00:39:37.256 "trtype": "tcp", 00:39:37.256 "traddr": "10.0.0.2", 00:39:37.256 "adrfam": "ipv4", 00:39:37.256 "trsvcid": "4420", 00:39:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:37.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:37.256 "hdgst": false, 00:39:37.256 "ddgst": false 00:39:37.256 }, 00:39:37.256 "method": "bdev_nvme_attach_controller" 00:39:37.256 }' 00:39:37.517 [2024-11-20 18:06:37.195531] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:39:37.517 [2024-11-20 18:06:37.195579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936135 ] 00:39:37.517 [2024-11-20 18:06:37.269949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.517 [2024-11-20 18:06:37.301949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.777 Running I/O for 1 seconds... 00:39:38.719 8606.00 IOPS, 33.62 MiB/s 00:39:38.719 Latency(us) 00:39:38.719 [2024-11-20T17:06:38.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:38.719 Verification LBA range: start 0x0 length 0x4000 00:39:38.719 Nvme1n1 : 1.01 8657.12 33.82 0.00 0.00 14719.99 1256.11 15619.41 00:39:38.719 [2024-11-20T17:06:38.635Z] =================================================================================================================== 00:39:38.719 [2024-11-20T17:06:38.635Z] Total : 8657.12 33.82 0.00 0.00 14719.99 1256.11 15619.41 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2936467 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:38.980 { 00:39:38.980 "params": { 00:39:38.980 "name": "Nvme$subsystem", 00:39:38.980 "trtype": "$TEST_TRANSPORT", 00:39:38.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.980 "adrfam": "ipv4", 00:39:38.980 "trsvcid": "$NVMF_PORT", 00:39:38.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.980 "hdgst": ${hdgst:-false}, 00:39:38.980 "ddgst": ${ddgst:-false} 00:39:38.980 }, 00:39:38.980 "method": "bdev_nvme_attach_controller" 00:39:38.980 } 00:39:38.980 EOF 00:39:38.980 )") 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 
00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:39:38.980 18:06:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:38.980 "params": { 00:39:38.980 "name": "Nvme1", 00:39:38.980 "trtype": "tcp", 00:39:38.980 "traddr": "10.0.0.2", 00:39:38.980 "adrfam": "ipv4", 00:39:38.980 "trsvcid": "4420", 00:39:38.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.980 "hdgst": false, 00:39:38.980 "ddgst": false 00:39:38.980 }, 00:39:38.980 "method": "bdev_nvme_attach_controller" 00:39:38.980 }' 00:39:38.980 [2024-11-20 18:06:38.820561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:39:38.980 [2024-11-20 18:06:38.820621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936467 ] 00:39:39.240 [2024-11-20 18:06:38.902693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.240 [2024-11-20 18:06:38.948741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.240 Running I/O for 15 seconds... 00:39:41.567 10906.00 IOPS, 42.60 MiB/s [2024-11-20T17:06:42.059Z] 11026.00 IOPS, 43.07 MiB/s [2024-11-20T17:06:42.059Z] 18:06:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2935802 00:39:42.143 18:06:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:39:42.143 [2024-11-20 18:06:41.785247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.143 [2024-11-20 18:06:41.785290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.143 [2024-11-20 18:06:41.785310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.143 [2024-11-20 18:06:41.785320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.143 [2024-11-20 18:06:41.785332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.143 [2024-11-20 18:06:41.785341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.143 [2024-11-20 18:06:41.785352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.143 [2024-11-20 18:06:41.785362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.143 [2024-11-20 18:06:41.785375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.143 [2024-11-20 18:06:41.785382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.143 [2024-11-20 18:06:41.785392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.143 [2024-11-20 
18:06:41.785400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion record pair repeats, with varying cid, for every remaining in-flight I/O on qid:1 — WRITE commands for lba:102504 through lba:102760 and READ commands for lba:101760 through lba:102208, all len:8 — each completed with ABORTED - SQ DELETION (00/08) after nvmf_tgt pid 2935802 was killed ...]
00:39:42.146 [2024-11-20 18:06:41.787125] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.146 [2024-11-20 18:06:41.787635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:39:42.146 [2024-11-20 18:06:41.787652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc660 is same with the state(6) to be set 00:39:42.146 [2024-11-20 18:06:41.787670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:42.146 [2024-11-20 18:06:41.787676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:42.146 [2024-11-20 18:06:41.787684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102776 len:8 PRP1 0x0 PRP2 0x0 00:39:42.146 [2024-11-20 18:06:41.787694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:42.146 [2024-11-20 18:06:41.787731] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20bc660 was disconnected and freed. reset controller. 00:39:42.146 [2024-11-20 18:06:41.791308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.146 [2024-11-20 18:06:41.791357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.146 [2024-11-20 18:06:41.792116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.146 [2024-11-20 18:06:41.792135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.146 [2024-11-20 18:06:41.792143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.146 [2024-11-20 18:06:41.792367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.146 [2024-11-20 18:06:41.792585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.146 [2024-11-20 18:06:41.792598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.146 [2024-11-20 18:06:41.792606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.146 [2024-11-20 18:06:41.796105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.146 [2024-11-20 18:06:41.805395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.147 [2024-11-20 18:06:41.805951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.147 [2024-11-20 18:06:41.805969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.147 [2024-11-20 18:06:41.805977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.147 [2024-11-20 18:06:41.806201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.147 [2024-11-20 18:06:41.806419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.147 [2024-11-20 18:06:41.806428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.147 [2024-11-20 18:06:41.806435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.147 [2024-11-20 18:06:41.809934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.147 [2024-11-20 18:06:41.819235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.147 [2024-11-20 18:06:41.819790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.147 [2024-11-20 18:06:41.819807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.147 [2024-11-20 18:06:41.819814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.147 [2024-11-20 18:06:41.820030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.147 [2024-11-20 18:06:41.820254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.147 [2024-11-20 18:06:41.820264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.147 [2024-11-20 18:06:41.820271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.147 [2024-11-20 18:06:41.823767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.147 [2024-11-20 18:06:41.833059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.147 [2024-11-20 18:06:41.833591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.147 [2024-11-20 18:06:41.833608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.147 [2024-11-20 18:06:41.833616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.147 [2024-11-20 18:06:41.833831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.147 [2024-11-20 18:06:41.834048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.147 [2024-11-20 18:06:41.834057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.147 [2024-11-20 18:06:41.834064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.147 [2024-11-20 18:06:41.837565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.147 [2024-11-20 18:06:41.846851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.147 [2024-11-20 18:06:41.847455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.147 [2024-11-20 18:06:41.847496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.147 [2024-11-20 18:06:41.847507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.147 [2024-11-20 18:06:41.847745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.147 [2024-11-20 18:06:41.847966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.147 [2024-11-20 18:06:41.847976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.147 [2024-11-20 18:06:41.847984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.147 [2024-11-20 18:06:41.851498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.147 [2024-11-20 18:06:41.860785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.147 [2024-11-20 18:06:41.861432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.147 [2024-11-20 18:06:41.861472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.147 [2024-11-20 18:06:41.861483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.147 [2024-11-20 18:06:41.861720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.147 [2024-11-20 18:06:41.861940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.147 [2024-11-20 18:06:41.861950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.147 [2024-11-20 18:06:41.861958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.147 [2024-11-20 18:06:41.865476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.147 [2024-11-20 18:06:41.874559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.147 [2024-11-20 18:06:41.875230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.147 [2024-11-20 18:06:41.875271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.147 [2024-11-20 18:06:41.875283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.147 [2024-11-20 18:06:41.875523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.147 [2024-11-20 18:06:41.875745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.147 [2024-11-20 18:06:41.875754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.147 [2024-11-20 18:06:41.875762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.147 [2024-11-20 18:06:41.879272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.147 [2024-11-20 18:06:41.888339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.147 [2024-11-20 18:06:41.888803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.147 [2024-11-20 18:06:41.888823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.147 [2024-11-20 18:06:41.888831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.147 [2024-11-20 18:06:41.889052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.147 [2024-11-20 18:06:41.889276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.147 [2024-11-20 18:06:41.889286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.147 [2024-11-20 18:06:41.889293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.147 [2024-11-20 18:06:41.892787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.147 [2024-11-20 18:06:41.902265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.147 [2024-11-20 18:06:41.902914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.147 [2024-11-20 18:06:41.902954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.147 [2024-11-20 18:06:41.902965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.147 [2024-11-20 18:06:41.903209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.147 [2024-11-20 18:06:41.903431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.147 [2024-11-20 18:06:41.903442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.147 [2024-11-20 18:06:41.903450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.147 [2024-11-20 18:06:41.906949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.148 [2024-11-20 18:06:41.916031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.148 [2024-11-20 18:06:41.916711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.148 [2024-11-20 18:06:41.916750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.148 [2024-11-20 18:06:41.916762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.148 [2024-11-20 18:06:41.916998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.148 [2024-11-20 18:06:41.917226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.148 [2024-11-20 18:06:41.917236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.148 [2024-11-20 18:06:41.917245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.148 [2024-11-20 18:06:41.920745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.148 [2024-11-20 18:06:41.929828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.148 [2024-11-20 18:06:41.930371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.148 [2024-11-20 18:06:41.930392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.148 [2024-11-20 18:06:41.930400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.148 [2024-11-20 18:06:41.930618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.148 [2024-11-20 18:06:41.930835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.148 [2024-11-20 18:06:41.930844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.148 [2024-11-20 18:06:41.930856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.148 [2024-11-20 18:06:41.934355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.148 [2024-11-20 18:06:41.943624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.148 [2024-11-20 18:06:41.944183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.148 [2024-11-20 18:06:41.944201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.148 [2024-11-20 18:06:41.944209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.148 [2024-11-20 18:06:41.944426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.148 [2024-11-20 18:06:41.944642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.148 [2024-11-20 18:06:41.944651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.148 [2024-11-20 18:06:41.944658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.148 [2024-11-20 18:06:41.948153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.148 [2024-11-20 18:06:41.957430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.148 [2024-11-20 18:06:41.958059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.148 [2024-11-20 18:06:41.958100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.148 [2024-11-20 18:06:41.958111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.148 [2024-11-20 18:06:41.958356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.148 [2024-11-20 18:06:41.958578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.148 [2024-11-20 18:06:41.958589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.148 [2024-11-20 18:06:41.958597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.148 [2024-11-20 18:06:41.962096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.148 [2024-11-20 18:06:41.971168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.148 [2024-11-20 18:06:41.971758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.148 [2024-11-20 18:06:41.971779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.148 [2024-11-20 18:06:41.971787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.148 [2024-11-20 18:06:41.972004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.148 [2024-11-20 18:06:41.972227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.148 [2024-11-20 18:06:41.972238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.148 [2024-11-20 18:06:41.972245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.148 [2024-11-20 18:06:41.975740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.148 [2024-11-20 18:06:41.985011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.148 [2024-11-20 18:06:41.985659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.148 [2024-11-20 18:06:41.985699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.148 [2024-11-20 18:06:41.985710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.148 [2024-11-20 18:06:41.985946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.148 [2024-11-20 18:06:41.986175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.148 [2024-11-20 18:06:41.986186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.148 [2024-11-20 18:06:41.986193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.148 [2024-11-20 18:06:41.989695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.148 [2024-11-20 18:06:41.998763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.148 [2024-11-20 18:06:41.999476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.148 [2024-11-20 18:06:41.999515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.148 [2024-11-20 18:06:41.999527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.148 [2024-11-20 18:06:41.999763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.148 [2024-11-20 18:06:41.999984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.149 [2024-11-20 18:06:41.999994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.149 [2024-11-20 18:06:42.000002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.149 [2024-11-20 18:06:42.003508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.149 [2024-11-20 18:06:42.012695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.149 [2024-11-20 18:06:42.013262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.149 [2024-11-20 18:06:42.013303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.149 [2024-11-20 18:06:42.013315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.149 [2024-11-20 18:06:42.013554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.149 [2024-11-20 18:06:42.013774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.149 [2024-11-20 18:06:42.013785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.149 [2024-11-20 18:06:42.013793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.149 [2024-11-20 18:06:42.017303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.149 [2024-11-20 18:06:42.026592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.149 [2024-11-20 18:06:42.027259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.149 [2024-11-20 18:06:42.027300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.149 [2024-11-20 18:06:42.027313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.149 [2024-11-20 18:06:42.027552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.149 [2024-11-20 18:06:42.027778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.149 [2024-11-20 18:06:42.027788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.149 [2024-11-20 18:06:42.027796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.149 [2024-11-20 18:06:42.031307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.149 [2024-11-20 18:06:42.040377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.149 [2024-11-20 18:06:42.040968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.149 [2024-11-20 18:06:42.041008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.149 [2024-11-20 18:06:42.041021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.149 [2024-11-20 18:06:42.041266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.149 [2024-11-20 18:06:42.041487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.149 [2024-11-20 18:06:42.041499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.149 [2024-11-20 18:06:42.041508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.149 [2024-11-20 18:06:42.045010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.412 [2024-11-20 18:06:42.054288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.412 [2024-11-20 18:06:42.054781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.412 [2024-11-20 18:06:42.054802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.412 [2024-11-20 18:06:42.054811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.412 [2024-11-20 18:06:42.055028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.412 [2024-11-20 18:06:42.055253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.412 [2024-11-20 18:06:42.055264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.412 [2024-11-20 18:06:42.055273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.412 [2024-11-20 18:06:42.058772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.412 [2024-11-20 18:06:42.068042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.412 [2024-11-20 18:06:42.068666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.412 [2024-11-20 18:06:42.068706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.412 [2024-11-20 18:06:42.068717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.068954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.069184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.069195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.069202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.072709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.413 [2024-11-20 18:06:42.081781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.082282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.082322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.082333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.082569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.082790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.082800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.082808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.086317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.413 [2024-11-20 18:06:42.095599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.096229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.096269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.096282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.096520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.096741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.096752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.096760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.100268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.413 [2024-11-20 18:06:42.109548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.110241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.110282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.110294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.110534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.110755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.110764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.110773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.114296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.413 [2024-11-20 18:06:42.123369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.123939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.123964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.123973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.124197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.124415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.124426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.124433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.127942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.413 [2024-11-20 18:06:42.137216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.137789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.137806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.137814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.138030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.138254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.138264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.138271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.141764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.413 [2024-11-20 18:06:42.151040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.151674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.151714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.151726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.151962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.152191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.152201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.152210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 9701.33 IOPS, 37.90 MiB/s [2024-11-20T17:06:42.329Z] [2024-11-20 18:06:42.157362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.413 [2024-11-20 18:06:42.164785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.165322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.165343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.165351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.165568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.165791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.165801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.165808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.169309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.413 [2024-11-20 18:06:42.178579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.179246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.179286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.179297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.179534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.179754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.179765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.179773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.183282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.413 [2024-11-20 18:06:42.192350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.192891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.192910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.192919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.193135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.193358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.193369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.193376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.196870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.413 [2024-11-20 18:06:42.206144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.206668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.206687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.413 [2024-11-20 18:06:42.206695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.413 [2024-11-20 18:06:42.206912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.413 [2024-11-20 18:06:42.207129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.413 [2024-11-20 18:06:42.207138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.413 [2024-11-20 18:06:42.207145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.413 [2024-11-20 18:06:42.210657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.413 [2024-11-20 18:06:42.219939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.413 [2024-11-20 18:06:42.220561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.413 [2024-11-20 18:06:42.220601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.414 [2024-11-20 18:06:42.220612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.414 [2024-11-20 18:06:42.220849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.414 [2024-11-20 18:06:42.221070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.414 [2024-11-20 18:06:42.221080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.414 [2024-11-20 18:06:42.221088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.414 [2024-11-20 18:06:42.224596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.414 [2024-11-20 18:06:42.233680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.414 [2024-11-20 18:06:42.234274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.414 [2024-11-20 18:06:42.234313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.414 [2024-11-20 18:06:42.234325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.414 [2024-11-20 18:06:42.234565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.414 [2024-11-20 18:06:42.234786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.414 [2024-11-20 18:06:42.234797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.414 [2024-11-20 18:06:42.234805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.414 [2024-11-20 18:06:42.238316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.414 [2024-11-20 18:06:42.247588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.414 [2024-11-20 18:06:42.248271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.414 [2024-11-20 18:06:42.248310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.414 [2024-11-20 18:06:42.248322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.414 [2024-11-20 18:06:42.248558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.414 [2024-11-20 18:06:42.248779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.414 [2024-11-20 18:06:42.248788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.414 [2024-11-20 18:06:42.248797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.414 [2024-11-20 18:06:42.252305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.414 [2024-11-20 18:06:42.261375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.414 [2024-11-20 18:06:42.261939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.414 [2024-11-20 18:06:42.261959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.414 [2024-11-20 18:06:42.261972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.414 [2024-11-20 18:06:42.262195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.414 [2024-11-20 18:06:42.262413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.414 [2024-11-20 18:06:42.262422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.414 [2024-11-20 18:06:42.262430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.414 [2024-11-20 18:06:42.265926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.414 [2024-11-20 18:06:42.275200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.414 [2024-11-20 18:06:42.275834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.414 [2024-11-20 18:06:42.275874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.414 [2024-11-20 18:06:42.275885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.414 [2024-11-20 18:06:42.276122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.414 [2024-11-20 18:06:42.276350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.414 [2024-11-20 18:06:42.276361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.414 [2024-11-20 18:06:42.276369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.414 [2024-11-20 18:06:42.279871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.414 [2024-11-20 18:06:42.288941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.414 [2024-11-20 18:06:42.289485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.414 [2024-11-20 18:06:42.289506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.414 [2024-11-20 18:06:42.289514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.414 [2024-11-20 18:06:42.289731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.414 [2024-11-20 18:06:42.289948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.414 [2024-11-20 18:06:42.289957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.414 [2024-11-20 18:06:42.289964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.414 [2024-11-20 18:06:42.293468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.414 [2024-11-20 18:06:42.302739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.414 [2024-11-20 18:06:42.303265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.414 [2024-11-20 18:06:42.303305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.414 [2024-11-20 18:06:42.303317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.414 [2024-11-20 18:06:42.303557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.414 [2024-11-20 18:06:42.303778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.414 [2024-11-20 18:06:42.303793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.414 [2024-11-20 18:06:42.303802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.414 [2024-11-20 18:06:42.307313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.414 [2024-11-20 18:06:42.316597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.414 [2024-11-20 18:06:42.317134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.414 [2024-11-20 18:06:42.317153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.414 [2024-11-20 18:06:42.317168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.414 [2024-11-20 18:06:42.317385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.414 [2024-11-20 18:06:42.317602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.414 [2024-11-20 18:06:42.317611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.414 [2024-11-20 18:06:42.317618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.414 [2024-11-20 18:06:42.321112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.676 [2024-11-20 18:06:42.330397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.676 [2024-11-20 18:06:42.331053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.676 [2024-11-20 18:06:42.331094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.331105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.331350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.331571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.331581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.331589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.335090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.677 [2024-11-20 18:06:42.344163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.344737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.344756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.344765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.344981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.345205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.345223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.345231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.348729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.677 [2024-11-20 18:06:42.358002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.358652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.358693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.358704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.358941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.359169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.359180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.359189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.362688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.677 [2024-11-20 18:06:42.371752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.372479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.372519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.372530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.372766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.372987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.372997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.373005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.376515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.677 [2024-11-20 18:06:42.385578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.386253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.386294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.386307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.386545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.386765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.386775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.386783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.390291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.677 [2024-11-20 18:06:42.399358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.400023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.400063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.400074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.400325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.400547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.400556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.400564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.404064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.677 [2024-11-20 18:06:42.413143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.413809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.413848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.413860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.414096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.414326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.414337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.414345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.417846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.677 [2024-11-20 18:06:42.426919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.427540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.427579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.427590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.427826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.428046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.428057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.428064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.431572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.677 [2024-11-20 18:06:42.440841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.441481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.441521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.441532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.441769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.441988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.441998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.442011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.445523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.677 [2024-11-20 18:06:42.454589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.455122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.455142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.455151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.455375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.455593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.455602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.455610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.459103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.677 [2024-11-20 18:06:42.468403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.469034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.469074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.469085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.469331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.469553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.469562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.469570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.473069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.677 [2024-11-20 18:06:42.482344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.482946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.482986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.482997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.483241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.483463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.483472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.483480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.486982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.677 [2024-11-20 18:06:42.496255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.496893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.496933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.496944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.677 [2024-11-20 18:06:42.497189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.677 [2024-11-20 18:06:42.497411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.677 [2024-11-20 18:06:42.497422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.677 [2024-11-20 18:06:42.497430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.677 [2024-11-20 18:06:42.500928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.677 [2024-11-20 18:06:42.510204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.677 [2024-11-20 18:06:42.510881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.677 [2024-11-20 18:06:42.510920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.677 [2024-11-20 18:06:42.510931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.678 [2024-11-20 18:06:42.511177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.678 [2024-11-20 18:06:42.511398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.678 [2024-11-20 18:06:42.511408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.678 [2024-11-20 18:06:42.511416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.678 [2024-11-20 18:06:42.514928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.678 [2024-11-20 18:06:42.523996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.678 [2024-11-20 18:06:42.524640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.678 [2024-11-20 18:06:42.524680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.678 [2024-11-20 18:06:42.524691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.678 [2024-11-20 18:06:42.524928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.678 [2024-11-20 18:06:42.525148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.678 [2024-11-20 18:06:42.525172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.678 [2024-11-20 18:06:42.525181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.678 [2024-11-20 18:06:42.528683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.678 [2024-11-20 18:06:42.537745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.678 [2024-11-20 18:06:42.538294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.678 [2024-11-20 18:06:42.538334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.678 [2024-11-20 18:06:42.538345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.678 [2024-11-20 18:06:42.538582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.678 [2024-11-20 18:06:42.538807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.678 [2024-11-20 18:06:42.538817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.678 [2024-11-20 18:06:42.538824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.678 [2024-11-20 18:06:42.542336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.678 [2024-11-20 18:06:42.551613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.678 [2024-11-20 18:06:42.552226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.678 [2024-11-20 18:06:42.552266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.678 [2024-11-20 18:06:42.552279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.678 [2024-11-20 18:06:42.552516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.678 [2024-11-20 18:06:42.552736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.678 [2024-11-20 18:06:42.552747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.678 [2024-11-20 18:06:42.552755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.678 [2024-11-20 18:06:42.556264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.678 [2024-11-20 18:06:42.565537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.678 [2024-11-20 18:06:42.566170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.678 [2024-11-20 18:06:42.566210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.678 [2024-11-20 18:06:42.566221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.678 [2024-11-20 18:06:42.566457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.678 [2024-11-20 18:06:42.566677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.678 [2024-11-20 18:06:42.566687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.678 [2024-11-20 18:06:42.566696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.678 [2024-11-20 18:06:42.570197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.678 [2024-11-20 18:06:42.579469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.678 [2024-11-20 18:06:42.580006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.678 [2024-11-20 18:06:42.580026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.678 [2024-11-20 18:06:42.580034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.678 [2024-11-20 18:06:42.580257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.678 [2024-11-20 18:06:42.580474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.678 [2024-11-20 18:06:42.580484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.678 [2024-11-20 18:06:42.580492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.678 [2024-11-20 18:06:42.583988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.940 [2024-11-20 18:06:42.593261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.593916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.593955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.593967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.594213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.594435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.594445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.594452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.597952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.940 [2024-11-20 18:06:42.607019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.607661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.607694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.607703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.607868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.608020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.608027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.608033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.610444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.940 [2024-11-20 18:06:42.619639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.620217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.620249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.620259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.620424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.620576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.620584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.620591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.622998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.940 [2024-11-20 18:06:42.632340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.632794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.632815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.632822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.632971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.633121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.633128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.633134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.635539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.940 [2024-11-20 18:06:42.645001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.645544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.645576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.645585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.645750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.645904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.645911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.645917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.648325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.940 [2024-11-20 18:06:42.657688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.658140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.658155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.658165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.658315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.658464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.658471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.658476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.660872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.940 [2024-11-20 18:06:42.670335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.670780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.670793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.670799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.670947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.671101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.671108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.671113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.673516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.940 [2024-11-20 18:06:42.682975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.683466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.683479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.683485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.683633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.683783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.683790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.683795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.686195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.940 [2024-11-20 18:06:42.695648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.696094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.696106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.696111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.696264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.696414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.696420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.696426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.698820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.940 [2024-11-20 18:06:42.708274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.708706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.708719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.708724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.708873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.709023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.709029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.709035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.711435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.940 [2024-11-20 18:06:42.720902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.721403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.721416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.721422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.721571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.721720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.721727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.721732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.724128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.940 [2024-11-20 18:06:42.733591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.734074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.734087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.734093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.734246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.734397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.734403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.734409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.736803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.940 [2024-11-20 18:06:42.746255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.746692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.746704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.746709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.746858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.747007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.747013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.747019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.749419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.940 [2024-11-20 18:06:42.758874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.759324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.759337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.759345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.759494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.759643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.759650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.759655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.762051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:42.940 [2024-11-20 18:06:42.771513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:42.940 [2024-11-20 18:06:42.772007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:42.940 [2024-11-20 18:06:42.772039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:42.940 [2024-11-20 18:06:42.772048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:42.940 [2024-11-20 18:06:42.772220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:42.940 [2024-11-20 18:06:42.772373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:42.940 [2024-11-20 18:06:42.772380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:42.940 [2024-11-20 18:06:42.772386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:42.940 [2024-11-20 18:06:42.774787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:42.940 [2024-11-20 18:06:42.784105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:42.940 [2024-11-20 18:06:42.784700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.940 [2024-11-20 18:06:42.784732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:42.940 [2024-11-20 18:06:42.784741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:42.940 [2024-11-20 18:06:42.784906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:42.940 [2024-11-20 18:06:42.785058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:42.940 [2024-11-20 18:06:42.785065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:42.940 [2024-11-20 18:06:42.785071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:42.940 [2024-11-20 18:06:42.787481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:42.940 [2024-11-20 18:06:42.796808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:42.940 [2024-11-20 18:06:42.797466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.941 [2024-11-20 18:06:42.797498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:42.941 [2024-11-20 18:06:42.797507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:42.941 [2024-11-20 18:06:42.797671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:42.941 [2024-11-20 18:06:42.797823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:42.941 [2024-11-20 18:06:42.797834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:42.941 [2024-11-20 18:06:42.797840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:42.941 [2024-11-20 18:06:42.800252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:42.941 [2024-11-20 18:06:42.809437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:42.941 [2024-11-20 18:06:42.809950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.941 [2024-11-20 18:06:42.809982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:42.941 [2024-11-20 18:06:42.809991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:42.941 [2024-11-20 18:06:42.810155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:42.941 [2024-11-20 18:06:42.810315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:42.941 [2024-11-20 18:06:42.810323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:42.941 [2024-11-20 18:06:42.810329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:42.941 [2024-11-20 18:06:42.812732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:42.941 [2024-11-20 18:06:42.822145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:42.941 [2024-11-20 18:06:42.822744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.941 [2024-11-20 18:06:42.822776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:42.941 [2024-11-20 18:06:42.822785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:42.941 [2024-11-20 18:06:42.822952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:42.941 [2024-11-20 18:06:42.823104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:42.941 [2024-11-20 18:06:42.823111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:42.941 [2024-11-20 18:06:42.823117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:42.941 [2024-11-20 18:06:42.825523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:42.941 [2024-11-20 18:06:42.834853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:42.941 [2024-11-20 18:06:42.835469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.941 [2024-11-20 18:06:42.835501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:42.941 [2024-11-20 18:06:42.835510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:42.941 [2024-11-20 18:06:42.835675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:42.941 [2024-11-20 18:06:42.835827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:42.941 [2024-11-20 18:06:42.835834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:42.941 [2024-11-20 18:06:42.835840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:42.941 [2024-11-20 18:06:42.838248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:42.941 [2024-11-20 18:06:42.847425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:42.941 [2024-11-20 18:06:42.847999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:42.941 [2024-11-20 18:06:42.848031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:42.941 [2024-11-20 18:06:42.848040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:42.941 [2024-11-20 18:06:42.848212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:42.941 [2024-11-20 18:06:42.848365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:42.941 [2024-11-20 18:06:42.848372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:42.941 [2024-11-20 18:06:42.848378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:42.941 [2024-11-20 18:06:42.850781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.202 [2024-11-20 18:06:42.860106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.202 [2024-11-20 18:06:42.860658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.202 [2024-11-20 18:06:42.860691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.202 [2024-11-20 18:06:42.860699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.202 [2024-11-20 18:06:42.860864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.202 [2024-11-20 18:06:42.861016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.202 [2024-11-20 18:06:42.861023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.202 [2024-11-20 18:06:42.861029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.202 [2024-11-20 18:06:42.863439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.202 [2024-11-20 18:06:42.872757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.202 [2024-11-20 18:06:42.873259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.202 [2024-11-20 18:06:42.873291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.202 [2024-11-20 18:06:42.873300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.202 [2024-11-20 18:06:42.873467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.202 [2024-11-20 18:06:42.873620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.202 [2024-11-20 18:06:42.873627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.202 [2024-11-20 18:06:42.873633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.202 [2024-11-20 18:06:42.876041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.885362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.885960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.885992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.886001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.886181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.886334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.886341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.886347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.888751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.898070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.898666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.898698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.898707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.898872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.899024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.899031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.899036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.901443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.910763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.911284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.911316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.911326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.911493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.911645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.911652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.911658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.914065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.923398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.923986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.924018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.924026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.924198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.924351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.924358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.924367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.926770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.936099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.936657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.936689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.936697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.936862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.937014] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.937021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.937027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.939436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.948754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.949074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.949091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.949097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.949252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.949403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.949410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.949415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.951812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.961423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.961906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.961919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.961924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.962073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.962229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.962236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.962241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.964639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.974101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.974676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.974711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.974720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.974885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.975037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.975044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.975050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.977459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.986789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:42.987383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:42.987415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:42.987424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:42.987589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:42.987741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:42.987748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:42.987754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:42.990163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:42.999484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:43.000068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:43.000100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:43.000109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:43.000284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:43.000436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:43.000444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:43.000449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:43.002852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:43.012173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:43.012717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:43.012749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:43.012757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:43.012922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:43.013078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:43.013085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:43.013091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:43.015509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:43.024832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:43.025323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:43.025355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:43.025364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:43.025530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:43.025682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:43.025689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:43.025695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:43.028100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:43.037433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:43.037987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:43.038019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:43.038028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:43.038201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:43.038354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:43.038361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:43.038366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:43.040769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:43.050087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:43.050680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:43.050711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:43.050720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:43.050885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:43.051037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:43.051044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:43.051050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:43.053464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:43.062785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:43.063408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:43.063441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:43.063450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:43.063616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:43.063768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:43.063775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:43.063782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:43.066191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:43.075374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:43.075921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:43.075953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.203 [2024-11-20 18:06:43.075962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.203 [2024-11-20 18:06:43.076128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.203 [2024-11-20 18:06:43.076288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.203 [2024-11-20 18:06:43.076296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.203 [2024-11-20 18:06:43.076302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.203 [2024-11-20 18:06:43.078703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.203 [2024-11-20 18:06:43.088020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.203 [2024-11-20 18:06:43.088363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.203 [2024-11-20 18:06:43.088380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.204 [2024-11-20 18:06:43.088386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.204 [2024-11-20 18:06:43.088536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.204 [2024-11-20 18:06:43.088687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.204 [2024-11-20 18:06:43.088693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.204 [2024-11-20 18:06:43.088698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.204 [2024-11-20 18:06:43.091100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.204 [2024-11-20 18:06:43.100697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.204 [2024-11-20 18:06:43.101179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.204 [2024-11-20 18:06:43.101192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.204 [2024-11-20 18:06:43.101202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.204 [2024-11-20 18:06:43.101350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.204 [2024-11-20 18:06:43.101500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.204 [2024-11-20 18:06:43.101507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.204 [2024-11-20 18:06:43.101512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.204 [2024-11-20 18:06:43.103910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.204 [2024-11-20 18:06:43.113377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.204 [2024-11-20 18:06:43.113957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.204 [2024-11-20 18:06:43.113989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.204 [2024-11-20 18:06:43.113998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.204 [2024-11-20 18:06:43.114172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.204 [2024-11-20 18:06:43.114325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.204 [2024-11-20 18:06:43.114333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.204 [2024-11-20 18:06:43.114339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.116752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.126083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.126594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.126626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.126635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.126799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.126951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.126959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.126964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.129383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.138704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.139299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.139332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.139341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.139506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.139658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.139668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.139674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.142083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.151407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.151997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.152030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.152038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.152212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.152369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.152377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.152382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 7276.00 IOPS, 28.42 MiB/s [2024-11-20T17:06:43.382Z] [2024-11-20 18:06:43.155913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.164113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.164665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.164696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.164705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.164870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.165023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.165030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.165036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.167448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.176786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.177423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.177454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.177463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.177628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.177781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.177788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.177793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.180204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.189387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.189984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.190015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.190024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.190197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.190350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.190358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.190364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.192766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.202083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.202629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.202661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.202669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.202834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.202986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.202993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.202999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.205410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.214731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.215333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.215365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.215374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.215539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.215699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.215706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.215712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.218121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.227442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.227986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.228018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.228030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.228208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.228361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.228368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.228374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.230777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.240098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.240688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.240719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.240728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.240893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.241046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.241053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.241059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.466 [2024-11-20 18:06:43.243470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.466 [2024-11-20 18:06:43.252789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.466 [2024-11-20 18:06:43.253385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.466 [2024-11-20 18:06:43.253416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.466 [2024-11-20 18:06:43.253425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.466 [2024-11-20 18:06:43.253590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.466 [2024-11-20 18:06:43.253742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.466 [2024-11-20 18:06:43.253749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.466 [2024-11-20 18:06:43.253755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.467 [2024-11-20 18:06:43.256164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.467 [2024-11-20 18:06:43.265484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.467 [2024-11-20 18:06:43.266036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.467 [2024-11-20 18:06:43.266067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.467 [2024-11-20 18:06:43.266076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.467 [2024-11-20 18:06:43.266250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.467 [2024-11-20 18:06:43.266404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.467 [2024-11-20 18:06:43.266414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.467 [2024-11-20 18:06:43.266421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.467 [2024-11-20 18:06:43.268824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.467 [2024-11-20 18:06:43.278146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.467 [2024-11-20 18:06:43.278596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.467 [2024-11-20 18:06:43.278628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.467 [2024-11-20 18:06:43.278637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.467 [2024-11-20 18:06:43.278801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.467 [2024-11-20 18:06:43.278954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.467 [2024-11-20 18:06:43.278961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.467 [2024-11-20 18:06:43.278966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.467 [2024-11-20 18:06:43.281377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.467 [2024-11-20 18:06:43.290847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.467 [2024-11-20 18:06:43.291401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.467 [2024-11-20 18:06:43.291433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.467 [2024-11-20 18:06:43.291442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.467 [2024-11-20 18:06:43.291607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.467 [2024-11-20 18:06:43.291759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.467 [2024-11-20 18:06:43.291766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.467 [2024-11-20 18:06:43.291772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.467 [2024-11-20 18:06:43.294182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.467 [2024-11-20 18:06:43.303501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.467 [2024-11-20 18:06:43.304064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.467 [2024-11-20 18:06:43.304096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.467 [2024-11-20 18:06:43.304105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.467 [2024-11-20 18:06:43.304278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.467 [2024-11-20 18:06:43.304431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.467 [2024-11-20 18:06:43.304438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.467 [2024-11-20 18:06:43.304444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.467 [2024-11-20 18:06:43.306847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.467 [2024-11-20 18:06:43.316178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.467 [2024-11-20 18:06:43.316707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.467 [2024-11-20 18:06:43.316739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.467 [2024-11-20 18:06:43.316748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.467 [2024-11-20 18:06:43.316913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.467 [2024-11-20 18:06:43.317065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.467 [2024-11-20 18:06:43.317072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.467 [2024-11-20 18:06:43.317078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.467 [2024-11-20 18:06:43.319490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.467 [2024-11-20 18:06:43.328830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.467 [2024-11-20 18:06:43.329289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.467 [2024-11-20 18:06:43.329321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.467 [2024-11-20 18:06:43.329330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.467 [2024-11-20 18:06:43.329497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.467 [2024-11-20 18:06:43.329649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.467 [2024-11-20 18:06:43.329656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.467 [2024-11-20 18:06:43.329662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.467 [2024-11-20 18:06:43.332072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:43.467 [2024-11-20 18:06:43.341537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.467 [2024-11-20 18:06:43.342029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.467 [2024-11-20 18:06:43.342044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.467 [2024-11-20 18:06:43.342050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.467 [2024-11-20 18:06:43.342206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.467 [2024-11-20 18:06:43.342356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.467 [2024-11-20 18:06:43.342362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.467 [2024-11-20 18:06:43.342367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.467 [2024-11-20 18:06:43.344764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.467 [2024-11-20 18:06:43.354252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.467 [2024-11-20 18:06:43.354688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.467 [2024-11-20 18:06:43.354701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.467 [2024-11-20 18:06:43.354707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.467 [2024-11-20 18:06:43.354860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.467 [2024-11-20 18:06:43.355010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.467 [2024-11-20 18:06:43.355017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.467 [2024-11-20 18:06:43.355022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.467 [2024-11-20 18:06:43.357423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.467 [2024-11-20 18:06:43.366882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.467 [2024-11-20 18:06:43.367486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.467 [2024-11-20 18:06:43.367518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.467 [2024-11-20 18:06:43.367527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.467 [2024-11-20 18:06:43.367692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.467 [2024-11-20 18:06:43.367844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.467 [2024-11-20 18:06:43.367851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.467 [2024-11-20 18:06:43.367856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.467 [2024-11-20 18:06:43.370260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.729 [2024-11-20 18:06:43.379585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.729 [2024-11-20 18:06:43.380079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.729 [2024-11-20 18:06:43.380094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.729 [2024-11-20 18:06:43.380101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.729 [2024-11-20 18:06:43.380254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.729 [2024-11-20 18:06:43.380405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.729 [2024-11-20 18:06:43.380411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.729 [2024-11-20 18:06:43.380416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.729 [2024-11-20 18:06:43.382814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.729 [2024-11-20 18:06:43.392274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.729 [2024-11-20 18:06:43.392755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.729 [2024-11-20 18:06:43.392768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.729 [2024-11-20 18:06:43.392774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.729 [2024-11-20 18:06:43.392922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.729 [2024-11-20 18:06:43.393071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.729 [2024-11-20 18:06:43.393078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.729 [2024-11-20 18:06:43.393087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.729 [2024-11-20 18:06:43.395485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.729 [2024-11-20 18:06:43.404937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.729 [2024-11-20 18:06:43.405318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.729 [2024-11-20 18:06:43.405331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.729 [2024-11-20 18:06:43.405338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.729 [2024-11-20 18:06:43.405486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.729 [2024-11-20 18:06:43.405635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.729 [2024-11-20 18:06:43.405642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.729 [2024-11-20 18:06:43.405647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.729 [2024-11-20 18:06:43.408044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.729 [2024-11-20 18:06:43.417516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.729 [2024-11-20 18:06:43.417980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.729 [2024-11-20 18:06:43.417993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.729 [2024-11-20 18:06:43.417998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.729 [2024-11-20 18:06:43.418147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.729 [2024-11-20 18:06:43.418302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.729 [2024-11-20 18:06:43.418309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.729 [2024-11-20 18:06:43.418314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.729 [2024-11-20 18:06:43.420712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.729 [2024-11-20 18:06:43.430182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.729 [2024-11-20 18:06:43.430693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.729 [2024-11-20 18:06:43.430725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.729 [2024-11-20 18:06:43.430734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.729 [2024-11-20 18:06:43.430899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.729 [2024-11-20 18:06:43.431051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.729 [2024-11-20 18:06:43.431057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.729 [2024-11-20 18:06:43.431063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.729 [2024-11-20 18:06:43.433473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.729 [2024-11-20 18:06:43.442799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.729 [2024-11-20 18:06:43.443489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.729 [2024-11-20 18:06:43.443521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.729 [2024-11-20 18:06:43.443530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.729 [2024-11-20 18:06:43.443695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.729 [2024-11-20 18:06:43.443847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.729 [2024-11-20 18:06:43.443854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.729 [2024-11-20 18:06:43.443860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.729 [2024-11-20 18:06:43.446267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.729 [2024-11-20 18:06:43.455467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.729 [2024-11-20 18:06:43.455934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.729 [2024-11-20 18:06:43.455966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.455975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.456141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.456299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.456307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.456313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.458729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.730 [2024-11-20 18:06:43.468054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.468565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.468580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.468586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.468736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.468885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.468891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.468897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.471300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.730 [2024-11-20 18:06:43.480765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.481288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.481320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.481329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.481496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.481654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.481662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.481668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.484077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.730 [2024-11-20 18:06:43.493396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.493773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.493789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.493795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.493944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.494093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.494100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.494105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.496508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.730 [2024-11-20 18:06:43.506126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.506709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.506741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.506750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.506914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.507067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.507074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.507080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.509487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.730 [2024-11-20 18:06:43.518830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.519383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.519416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.519425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.519590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.519742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.519750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.519756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.522177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.730 [2024-11-20 18:06:43.531526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.532113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.532145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.532154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.532326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.532479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.532486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.532492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.534898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.730 [2024-11-20 18:06:43.544241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.544736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.544752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.544757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.544907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.545056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.545063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.545068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.547504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.730 [2024-11-20 18:06:43.556841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.557305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.557319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.557325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.557474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.557623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.557630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.557636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.560034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.730 [2024-11-20 18:06:43.569510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.569929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.569942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.730 [2024-11-20 18:06:43.569953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.730 [2024-11-20 18:06:43.570102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.730 [2024-11-20 18:06:43.570257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.730 [2024-11-20 18:06:43.570265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.730 [2024-11-20 18:06:43.570270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.730 [2024-11-20 18:06:43.572671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.730 [2024-11-20 18:06:43.582143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.730 [2024-11-20 18:06:43.582599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.730 [2024-11-20 18:06:43.582612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.731 [2024-11-20 18:06:43.582617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.731 [2024-11-20 18:06:43.582767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.731 [2024-11-20 18:06:43.582916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.731 [2024-11-20 18:06:43.582923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.731 [2024-11-20 18:06:43.582928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.731 [2024-11-20 18:06:43.585333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.731 [2024-11-20 18:06:43.594804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.731 [2024-11-20 18:06:43.595284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.731 [2024-11-20 18:06:43.595298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.731 [2024-11-20 18:06:43.595303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.731 [2024-11-20 18:06:43.595452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.731 [2024-11-20 18:06:43.595602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.731 [2024-11-20 18:06:43.595609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.731 [2024-11-20 18:06:43.595614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.731 [2024-11-20 18:06:43.598011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.731 [2024-11-20 18:06:43.607488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.731 [2024-11-20 18:06:43.607922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.731 [2024-11-20 18:06:43.607935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.731 [2024-11-20 18:06:43.607941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.731 [2024-11-20 18:06:43.608089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.731 [2024-11-20 18:06:43.608247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.731 [2024-11-20 18:06:43.608255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.731 [2024-11-20 18:06:43.608260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.731 [2024-11-20 18:06:43.610686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.731 [2024-11-20 18:06:43.620171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.731 [2024-11-20 18:06:43.620656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.731 [2024-11-20 18:06:43.620669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.731 [2024-11-20 18:06:43.620675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.731 [2024-11-20 18:06:43.620824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.731 [2024-11-20 18:06:43.620973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.731 [2024-11-20 18:06:43.620979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.731 [2024-11-20 18:06:43.620984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.731 [2024-11-20 18:06:43.623390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.731 [2024-11-20 18:06:43.632875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.731 [2024-11-20 18:06:43.633462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.731 [2024-11-20 18:06:43.633494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.731 [2024-11-20 18:06:43.633503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.731 [2024-11-20 18:06:43.633668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.731 [2024-11-20 18:06:43.633820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.731 [2024-11-20 18:06:43.633827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.731 [2024-11-20 18:06:43.633833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.731 [2024-11-20 18:06:43.636241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.993 [2024-11-20 18:06:43.645568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.646018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.646034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.646040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.646193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.646344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.646351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.646358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.648756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.994 [2024-11-20 18:06:43.658225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.658670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.658683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.658689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.658837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.658986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.658993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.658998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.661403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.994 [2024-11-20 18:06:43.670867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.671446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.671479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.671488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.671653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.671805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.671812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.671818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.674297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.994 [2024-11-20 18:06:43.683491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.683945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.683960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.683966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.684115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.684269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.684276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.684281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.686680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.994 [2024-11-20 18:06:43.696146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.696709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.696742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.696754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.696919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.697071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.697079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.697084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.699491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.994 [2024-11-20 18:06:43.708816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.709313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.709329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.709335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.709485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.709635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.709642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.709647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.712046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.994 [2024-11-20 18:06:43.721522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.722107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.722139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.722148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.722319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.722472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.722479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.722485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.724886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.994 [2024-11-20 18:06:43.734224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.734792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.734823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.734832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.734997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.735150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.735167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.735173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.737576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.994 [2024-11-20 18:06:43.746900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.747367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.747384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.747391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.747540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.747689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.747696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.747701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.750098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.994 [2024-11-20 18:06:43.759562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.760046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.994 [2024-11-20 18:06:43.760059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.994 [2024-11-20 18:06:43.760064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.994 [2024-11-20 18:06:43.760217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.994 [2024-11-20 18:06:43.760366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.994 [2024-11-20 18:06:43.760373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.994 [2024-11-20 18:06:43.760379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.994 [2024-11-20 18:06:43.762775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.994 [2024-11-20 18:06:43.772241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.994 [2024-11-20 18:06:43.772564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.772579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.772585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.772734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.772883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.772890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.772895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.775298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.995 [2024-11-20 18:06:43.784898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.785309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.785339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.785348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.785514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.785666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.785673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.785679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.788088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.995 [2024-11-20 18:06:43.797557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.798148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.798186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.798196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.798362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.798514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.798521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.798526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.800930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.995 [2024-11-20 18:06:43.810254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.810749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.810765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.810771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.810920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.811069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.811076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.811081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.813484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.995 [2024-11-20 18:06:43.822955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.823578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.823611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.823619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.823788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.823941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.823948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.823954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.826363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.995 [2024-11-20 18:06:43.835554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.836011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.836027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.836033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.836186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.836337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.836343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.836349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.838748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.995 [2024-11-20 18:06:43.848280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.848866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.848898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.848906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.849071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.849229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.849237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.849242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.851646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.995 [2024-11-20 18:06:43.860973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.861446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.861478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.861487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.861652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.861804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.861811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.861821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.864229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.995 [2024-11-20 18:06:43.873553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.874081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.874097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.874103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.874257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.874407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.874414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.874419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.876816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:43.995 [2024-11-20 18:06:43.886138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:43.995 [2024-11-20 18:06:43.886494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:43.995 [2024-11-20 18:06:43.886508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:43.995 [2024-11-20 18:06:43.886514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:43.995 [2024-11-20 18:06:43.886662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:43.995 [2024-11-20 18:06:43.886812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:43.995 [2024-11-20 18:06:43.886818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:43.995 [2024-11-20 18:06:43.886823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:43.995 [2024-11-20 18:06:43.889221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:43.995 [2024-11-20 18:06:43.898822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:43.995 [2024-11-20 18:06:43.899275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:43.995 [2024-11-20 18:06:43.899289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:43.996 [2024-11-20 18:06:43.899294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:43.996 [2024-11-20 18:06:43.899443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:43.996 [2024-11-20 18:06:43.899592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:43.996 [2024-11-20 18:06:43.899599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:43.996 [2024-11-20 18:06:43.899604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:43.996 [2024-11-20 18:06:43.901999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.257 [2024-11-20 18:06:43.911463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.257 [2024-11-20 18:06:43.911944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.257 [2024-11-20 18:06:43.911960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.257 [2024-11-20 18:06:43.911966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.257 [2024-11-20 18:06:43.912114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:43.912267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.258 [2024-11-20 18:06:43.912274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.258 [2024-11-20 18:06:43.912279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.258 [2024-11-20 18:06:43.914674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.258 [2024-11-20 18:06:43.924145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.258 [2024-11-20 18:06:43.924592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.258 [2024-11-20 18:06:43.924605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.258 [2024-11-20 18:06:43.924611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.258 [2024-11-20 18:06:43.924760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:43.924909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.258 [2024-11-20 18:06:43.924915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.258 [2024-11-20 18:06:43.924920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.258 [2024-11-20 18:06:43.927320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.258 [2024-11-20 18:06:43.936790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.258 [2024-11-20 18:06:43.937253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.258 [2024-11-20 18:06:43.937266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.258 [2024-11-20 18:06:43.937273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.258 [2024-11-20 18:06:43.937421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:43.937570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.258 [2024-11-20 18:06:43.937577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.258 [2024-11-20 18:06:43.937582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.258 [2024-11-20 18:06:43.939978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.258 [2024-11-20 18:06:43.949442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.258 [2024-11-20 18:06:43.949766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.258 [2024-11-20 18:06:43.949781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.258 [2024-11-20 18:06:43.949787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.258 [2024-11-20 18:06:43.949936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:43.950088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.258 [2024-11-20 18:06:43.950095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.258 [2024-11-20 18:06:43.950100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.258 [2024-11-20 18:06:43.952501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.258 [2024-11-20 18:06:43.962103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.258 [2024-11-20 18:06:43.962644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.258 [2024-11-20 18:06:43.962676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.258 [2024-11-20 18:06:43.962685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.258 [2024-11-20 18:06:43.962850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:43.963002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.258 [2024-11-20 18:06:43.963009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.258 [2024-11-20 18:06:43.963014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.258 [2024-11-20 18:06:43.965423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.258 [2024-11-20 18:06:43.974747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.258 [2024-11-20 18:06:43.975219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.258 [2024-11-20 18:06:43.975235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.258 [2024-11-20 18:06:43.975241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.258 [2024-11-20 18:06:43.975390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:43.975539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.258 [2024-11-20 18:06:43.975547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.258 [2024-11-20 18:06:43.975553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.258 [2024-11-20 18:06:43.977949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.258 [2024-11-20 18:06:43.987415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.258 [2024-11-20 18:06:43.987896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.258 [2024-11-20 18:06:43.987909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.258 [2024-11-20 18:06:43.987915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.258 [2024-11-20 18:06:43.988064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:43.988218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.258 [2024-11-20 18:06:43.988225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.258 [2024-11-20 18:06:43.988231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.258 [2024-11-20 18:06:43.990631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.258 [2024-11-20 18:06:44.000093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.258 [2024-11-20 18:06:44.000535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.258 [2024-11-20 18:06:44.000549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.258 [2024-11-20 18:06:44.000554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.258 [2024-11-20 18:06:44.000703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:44.000853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.258 [2024-11-20 18:06:44.000860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.258 [2024-11-20 18:06:44.000865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.258 [2024-11-20 18:06:44.003271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.258 [2024-11-20 18:06:44.012729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.258 [2024-11-20 18:06:44.013211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.258 [2024-11-20 18:06:44.013224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.258 [2024-11-20 18:06:44.013230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.258 [2024-11-20 18:06:44.013379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.258 [2024-11-20 18:06:44.013528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.259 [2024-11-20 18:06:44.013535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.259 [2024-11-20 18:06:44.013540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.259 [2024-11-20 18:06:44.015936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.259 [2024-11-20 18:06:44.025407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.259 [2024-11-20 18:06:44.025869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.259 [2024-11-20 18:06:44.025882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.259 [2024-11-20 18:06:44.025887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.259 [2024-11-20 18:06:44.026036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.259 [2024-11-20 18:06:44.026188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.259 [2024-11-20 18:06:44.026195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.259 [2024-11-20 18:06:44.026201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.259 [2024-11-20 18:06:44.028597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.259 [2024-11-20 18:06:44.038061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.259 [2024-11-20 18:06:44.038529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.259 [2024-11-20 18:06:44.038542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.259 [2024-11-20 18:06:44.038551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.259 [2024-11-20 18:06:44.038700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.259 [2024-11-20 18:06:44.038849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.259 [2024-11-20 18:06:44.038856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.259 [2024-11-20 18:06:44.038861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.259 [2024-11-20 18:06:44.041258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.259 [2024-11-20 18:06:44.050714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.259 [2024-11-20 18:06:44.051057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.259 [2024-11-20 18:06:44.051071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.259 [2024-11-20 18:06:44.051077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.259 [2024-11-20 18:06:44.051230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.259 [2024-11-20 18:06:44.051380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.259 [2024-11-20 18:06:44.051386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.259 [2024-11-20 18:06:44.051391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.259 [2024-11-20 18:06:44.053787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.259 [2024-11-20 18:06:44.063405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.259 [2024-11-20 18:06:44.063760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.259 [2024-11-20 18:06:44.063774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.259 [2024-11-20 18:06:44.063779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.259 [2024-11-20 18:06:44.063928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.259 [2024-11-20 18:06:44.064078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.259 [2024-11-20 18:06:44.064084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.259 [2024-11-20 18:06:44.064089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.259 [2024-11-20 18:06:44.066494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.259 [2024-11-20 18:06:44.076101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.259 [2024-11-20 18:06:44.076561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.259 [2024-11-20 18:06:44.076593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.259 [2024-11-20 18:06:44.076602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.259 [2024-11-20 18:06:44.076767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.259 [2024-11-20 18:06:44.076919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.259 [2024-11-20 18:06:44.076930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.259 [2024-11-20 18:06:44.076937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.259 [2024-11-20 18:06:44.079357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.259 [2024-11-20 18:06:44.088698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.259 [2024-11-20 18:06:44.089115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.259 [2024-11-20 18:06:44.089132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.259 [2024-11-20 18:06:44.089139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.259 [2024-11-20 18:06:44.089294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.259 [2024-11-20 18:06:44.089445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.259 [2024-11-20 18:06:44.089452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.259 [2024-11-20 18:06:44.089457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.259 [2024-11-20 18:06:44.091857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.259 [2024-11-20 18:06:44.101340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.259 [2024-11-20 18:06:44.101798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.259 [2024-11-20 18:06:44.101811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.259 [2024-11-20 18:06:44.101816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.259 [2024-11-20 18:06:44.101965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.259 [2024-11-20 18:06:44.102114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.259 [2024-11-20 18:06:44.102121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.259 [2024-11-20 18:06:44.102126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.259 [2024-11-20 18:06:44.104532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.259 [2024-11-20 18:06:44.114010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.259 [2024-11-20 18:06:44.114504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.259 [2024-11-20 18:06:44.114518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.259 [2024-11-20 18:06:44.114523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.259 [2024-11-20 18:06:44.114672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.260 [2024-11-20 18:06:44.114822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.260 [2024-11-20 18:06:44.114828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.260 [2024-11-20 18:06:44.114833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.260 [2024-11-20 18:06:44.117236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.260 [2024-11-20 18:06:44.126724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.260 [2024-11-20 18:06:44.127052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.260 [2024-11-20 18:06:44.127067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.260 [2024-11-20 18:06:44.127073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.260 [2024-11-20 18:06:44.127230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.260 [2024-11-20 18:06:44.127380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.260 [2024-11-20 18:06:44.127387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.260 [2024-11-20 18:06:44.127392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.260 [2024-11-20 18:06:44.129805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.260 [2024-11-20 18:06:44.139425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.260 [2024-11-20 18:06:44.139985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.260 [2024-11-20 18:06:44.140017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.260 [2024-11-20 18:06:44.140026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.260 [2024-11-20 18:06:44.140197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.260 [2024-11-20 18:06:44.140349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.260 [2024-11-20 18:06:44.140357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.260 [2024-11-20 18:06:44.140363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.260 [2024-11-20 18:06:44.142767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.260 [2024-11-20 18:06:44.152101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.260 [2024-11-20 18:06:44.152552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.260 [2024-11-20 18:06:44.152569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.260 [2024-11-20 18:06:44.152575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.260 [2024-11-20 18:06:44.152725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.260 [2024-11-20 18:06:44.152875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.260 [2024-11-20 18:06:44.152881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.260 [2024-11-20 18:06:44.152887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.260 [2024-11-20 18:06:44.155297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.260 5820.80 IOPS, 22.74 MiB/s [2024-11-20T17:06:44.176Z] [2024-11-20 18:06:44.164768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.260 [2024-11-20 18:06:44.165219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.260 [2024-11-20 18:06:44.165233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.260 [2024-11-20 18:06:44.165245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.260 [2024-11-20 18:06:44.165394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.260 [2024-11-20 18:06:44.165543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.260 [2024-11-20 18:06:44.165549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.260 [2024-11-20 18:06:44.165554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.260 [2024-11-20 18:06:44.167953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
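(The interleaved sample "5820.80 IOPS, 22.74 MiB/s" above is periodic throughput output from the I/O generator driving this test, presumably bdevperf, printed in between the reset errors. The two figures are consistent with a 4 KiB I/O size: 5820.80 IOPS x 4096 B = 23,841,996.8 B/s ≈ 22.74 MiB/s.)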
00:39:44.521 [2024-11-20 18:06:44.177435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.521 [2024-11-20 18:06:44.177976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.178007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.178016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.178191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.178345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.178352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.178358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.180760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.190079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.190633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.190665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.190674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.190839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.190991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.190998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.191004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.193410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.202733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.203188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.203205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.203211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.203363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.203512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.203523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.203529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.205930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.215394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.215951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.215983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.215991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.216156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.216316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.216324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.216330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.218734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.228072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.228545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.228577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.228586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.228751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.228903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.228910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.228916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.231331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.240649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.241229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.241261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.241270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.241439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.241591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.241598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.241604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.244010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.253334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.253874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.253906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.253915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.254079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.254239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.254247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.254252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.256655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.265972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.266430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.266446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.266452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.266600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.266750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.266756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.266761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.269161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.278613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.279087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.279100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.279106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.279259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.279409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.279416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.279421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.281816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.291272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.291727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.291739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.291745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.291897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.292047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.292053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.292058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.294460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.303911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.304484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.304515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.304524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.304689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.304841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.304848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.304854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.307264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.316618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.317208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.317240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.317248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.317414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.317567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.317573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.317581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.320002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.329327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.329870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.329902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.329911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.330075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.330242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.330250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.330260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.332664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.341985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.522 [2024-11-20 18:06:44.342546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.522 [2024-11-20 18:06:44.342578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.522 [2024-11-20 18:06:44.342587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.522 [2024-11-20 18:06:44.342753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.522 [2024-11-20 18:06:44.342905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.522 [2024-11-20 18:06:44.342912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.522 [2024-11-20 18:06:44.342918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.522 [2024-11-20 18:06:44.345326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.522 [2024-11-20 18:06:44.354644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.523 [2024-11-20 18:06:44.355063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.523 [2024-11-20 18:06:44.355079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.523 [2024-11-20 18:06:44.355084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.523 [2024-11-20 18:06:44.355239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.523 [2024-11-20 18:06:44.355390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.523 [2024-11-20 18:06:44.355396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.523 [2024-11-20 18:06:44.355402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.523 [2024-11-20 18:06:44.357800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.523 [2024-11-20 18:06:44.367257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.523 [2024-11-20 18:06:44.367791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.523 [2024-11-20 18:06:44.367823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.523 [2024-11-20 18:06:44.367832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.523 [2024-11-20 18:06:44.367997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.523 [2024-11-20 18:06:44.368149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.523 [2024-11-20 18:06:44.368156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.523 [2024-11-20 18:06:44.368169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.523 [2024-11-20 18:06:44.370572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.523 [2024-11-20 18:06:44.379889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.523 [2024-11-20 18:06:44.380485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.523 [2024-11-20 18:06:44.380521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.523 [2024-11-20 18:06:44.380529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.523 [2024-11-20 18:06:44.380694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.523 [2024-11-20 18:06:44.380846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.523 [2024-11-20 18:06:44.380853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.523 [2024-11-20 18:06:44.380858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.523 [2024-11-20 18:06:44.383268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.523 [2024-11-20 18:06:44.392584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.523 [2024-11-20 18:06:44.393114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.523 [2024-11-20 18:06:44.393146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.523 [2024-11-20 18:06:44.393155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.523 [2024-11-20 18:06:44.393328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.523 [2024-11-20 18:06:44.393481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.523 [2024-11-20 18:06:44.393488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.523 [2024-11-20 18:06:44.393494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.523 [2024-11-20 18:06:44.395895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.523 [2024-11-20 18:06:44.405214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.523 [2024-11-20 18:06:44.405780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.523 [2024-11-20 18:06:44.405812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.523 [2024-11-20 18:06:44.405820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.523 [2024-11-20 18:06:44.405986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.523 [2024-11-20 18:06:44.406138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.523 [2024-11-20 18:06:44.406145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.523 [2024-11-20 18:06:44.406151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.523 [2024-11-20 18:06:44.408560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.523 [2024-11-20 18:06:44.417880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.523 [2024-11-20 18:06:44.418455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.523 [2024-11-20 18:06:44.418486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.523 [2024-11-20 18:06:44.418495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.523 [2024-11-20 18:06:44.418660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.523 [2024-11-20 18:06:44.418816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.523 [2024-11-20 18:06:44.418824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.523 [2024-11-20 18:06:44.418829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.523 [2024-11-20 18:06:44.421250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.523 [2024-11-20 18:06:44.430572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.523 [2024-11-20 18:06:44.431176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.523 [2024-11-20 18:06:44.431207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.523 [2024-11-20 18:06:44.431216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.523 [2024-11-20 18:06:44.431381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.523 [2024-11-20 18:06:44.431533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.523 [2024-11-20 18:06:44.431540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.523 [2024-11-20 18:06:44.431546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.785 [2024-11-20 18:06:44.433952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.785 [2024-11-20 18:06:44.443280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:44.785 [2024-11-20 18:06:44.443871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:44.785 [2024-11-20 18:06:44.443903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420
00:39:44.785 [2024-11-20 18:06:44.443912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set
00:39:44.785 [2024-11-20 18:06:44.444077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor
00:39:44.785 [2024-11-20 18:06:44.444237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:44.785 [2024-11-20 18:06:44.444244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:44.785 [2024-11-20 18:06:44.444250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:44.785 [2024-11-20 18:06:44.446652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:44.785 [2024-11-20 18:06:44.455971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.785 [2024-11-20 18:06:44.456576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.785 [2024-11-20 18:06:44.456608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.785 [2024-11-20 18:06:44.456617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.785 [2024-11-20 18:06:44.456782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.785 [2024-11-20 18:06:44.456934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.785 [2024-11-20 18:06:44.456941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.785 [2024-11-20 18:06:44.456947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.785 [2024-11-20 18:06:44.459360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.785 [2024-11-20 18:06:44.468677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.785 [2024-11-20 18:06:44.469281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.785 [2024-11-20 18:06:44.469313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.785 [2024-11-20 18:06:44.469322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.469487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.469638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.469646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.469652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.472060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.786 [2024-11-20 18:06:44.481382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.481973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.482005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.482014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.482186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.482339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.482346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.482352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.484753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.786 [2024-11-20 18:06:44.494069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.494645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.494677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.494685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.494850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.495002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.495010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.495016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.497423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.786 [2024-11-20 18:06:44.506748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.507278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.507310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.507322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.507487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.507638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.507645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.507651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.510058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.786 [2024-11-20 18:06:44.519381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.519873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.519888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.519894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.520043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.520206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.520214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.520219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.522621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.786 [2024-11-20 18:06:44.532077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.532547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.532561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.532567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.532716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.532865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.532871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.532876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.535275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.786 [2024-11-20 18:06:44.544727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.545166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.545180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.545185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.545334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.545484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.545493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.545499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.547897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.786 [2024-11-20 18:06:44.557351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.557785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.557797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.557802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.557951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.558100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.558106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.558112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.560513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.786 [2024-11-20 18:06:44.569964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.570509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.570540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.570549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.570714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.570866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.570873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.570878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.573289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.786 [2024-11-20 18:06:44.582607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.583056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.583088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.583097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.583272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.786 [2024-11-20 18:06:44.583425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.786 [2024-11-20 18:06:44.583432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.786 [2024-11-20 18:06:44.583438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.786 [2024-11-20 18:06:44.585838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.786 [2024-11-20 18:06:44.595306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.786 [2024-11-20 18:06:44.595910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.786 [2024-11-20 18:06:44.595942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.786 [2024-11-20 18:06:44.595951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.786 [2024-11-20 18:06:44.596118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.787 [2024-11-20 18:06:44.596277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.787 [2024-11-20 18:06:44.596285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.787 [2024-11-20 18:06:44.596291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.787 [2024-11-20 18:06:44.598691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.787 [2024-11-20 18:06:44.608008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.787 [2024-11-20 18:06:44.608555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.787 [2024-11-20 18:06:44.608587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.787 [2024-11-20 18:06:44.608596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.787 [2024-11-20 18:06:44.608760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.787 [2024-11-20 18:06:44.608912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.787 [2024-11-20 18:06:44.608919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.787 [2024-11-20 18:06:44.608925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.787 [2024-11-20 18:06:44.611335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.787 [2024-11-20 18:06:44.620665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.787 [2024-11-20 18:06:44.621157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.787 [2024-11-20 18:06:44.621176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.787 [2024-11-20 18:06:44.621182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.787 [2024-11-20 18:06:44.621332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.787 [2024-11-20 18:06:44.621481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.787 [2024-11-20 18:06:44.621487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.787 [2024-11-20 18:06:44.621493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.787 [2024-11-20 18:06:44.623890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.787 [2024-11-20 18:06:44.633355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.787 [2024-11-20 18:06:44.633851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.787 [2024-11-20 18:06:44.633865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.787 [2024-11-20 18:06:44.633874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.787 [2024-11-20 18:06:44.634023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.787 [2024-11-20 18:06:44.634178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.787 [2024-11-20 18:06:44.634186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.787 [2024-11-20 18:06:44.634191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.787 [2024-11-20 18:06:44.636589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.787 [2024-11-20 18:06:44.646042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.787 [2024-11-20 18:06:44.646512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.787 [2024-11-20 18:06:44.646525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.787 [2024-11-20 18:06:44.646530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.787 [2024-11-20 18:06:44.646679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.787 [2024-11-20 18:06:44.646829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.787 [2024-11-20 18:06:44.646835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.787 [2024-11-20 18:06:44.646840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.787 [2024-11-20 18:06:44.649241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.787 [2024-11-20 18:06:44.658694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.787 [2024-11-20 18:06:44.659178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.787 [2024-11-20 18:06:44.659191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.787 [2024-11-20 18:06:44.659197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.787 [2024-11-20 18:06:44.659345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.787 [2024-11-20 18:06:44.659494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.787 [2024-11-20 18:06:44.659501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.787 [2024-11-20 18:06:44.659506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.787 [2024-11-20 18:06:44.661902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.787 [2024-11-20 18:06:44.671354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.787 [2024-11-20 18:06:44.671890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.787 [2024-11-20 18:06:44.671922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.787 [2024-11-20 18:06:44.671931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.787 [2024-11-20 18:06:44.672095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.787 [2024-11-20 18:06:44.672255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.787 [2024-11-20 18:06:44.672263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.787 [2024-11-20 18:06:44.672272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.787 [2024-11-20 18:06:44.674676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:44.787 [2024-11-20 18:06:44.684003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.787 [2024-11-20 18:06:44.684571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:44.787 [2024-11-20 18:06:44.684603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:44.787 [2024-11-20 18:06:44.684612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:44.787 [2024-11-20 18:06:44.684777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:44.787 [2024-11-20 18:06:44.684929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:44.787 [2024-11-20 18:06:44.684936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:44.787 [2024-11-20 18:06:44.684942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:44.787 [2024-11-20 18:06:44.687350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:44.787 [2024-11-20 18:06:44.696672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:44.787 [2024-11-20 18:06:44.697135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.049 [2024-11-20 18:06:44.697150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.049 [2024-11-20 18:06:44.697162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.049 [2024-11-20 18:06:44.697312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.049 [2024-11-20 18:06:44.697463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.049 [2024-11-20 18:06:44.697469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.049 [2024-11-20 18:06:44.697475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.049 [2024-11-20 18:06:44.699946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.049 [2024-11-20 18:06:44.709272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.049 [2024-11-20 18:06:44.709745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.049 [2024-11-20 18:06:44.709759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.049 [2024-11-20 18:06:44.709764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.049 [2024-11-20 18:06:44.709913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.049 [2024-11-20 18:06:44.710062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.049 [2024-11-20 18:06:44.710069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.049 [2024-11-20 18:06:44.710075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.049 [2024-11-20 18:06:44.712474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.049 [2024-11-20 18:06:44.721937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.049 [2024-11-20 18:06:44.722511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.049 [2024-11-20 18:06:44.722544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.049 [2024-11-20 18:06:44.722552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.049 [2024-11-20 18:06:44.722717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.049 [2024-11-20 18:06:44.722870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.049 [2024-11-20 18:06:44.722877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.049 [2024-11-20 18:06:44.722883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.049 [2024-11-20 18:06:44.725294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.049 [2024-11-20 18:06:44.734623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.049 [2024-11-20 18:06:44.735216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.049 [2024-11-20 18:06:44.735248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.049 [2024-11-20 18:06:44.735257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.049 [2024-11-20 18:06:44.735424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.049 [2024-11-20 18:06:44.735577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.049 [2024-11-20 18:06:44.735584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.049 [2024-11-20 18:06:44.735590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.049 [2024-11-20 18:06:44.738001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.049 [2024-11-20 18:06:44.747330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.049 [2024-11-20 18:06:44.747874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.049 [2024-11-20 18:06:44.747906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.049 [2024-11-20 18:06:44.747914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.049 [2024-11-20 18:06:44.748079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.049 [2024-11-20 18:06:44.748238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.049 [2024-11-20 18:06:44.748246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.049 [2024-11-20 18:06:44.748252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.049 [2024-11-20 18:06:44.750656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.049 [2024-11-20 18:06:44.759974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.049 [2024-11-20 18:06:44.760564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.049 [2024-11-20 18:06:44.760597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.049 [2024-11-20 18:06:44.760605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.049 [2024-11-20 18:06:44.760774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.049 [2024-11-20 18:06:44.760927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.049 [2024-11-20 18:06:44.760933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.049 [2024-11-20 18:06:44.760939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.049 [2024-11-20 18:06:44.763348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.049 [2024-11-20 18:06:44.772669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.049 [2024-11-20 18:06:44.773261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.049 [2024-11-20 18:06:44.773293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.049 [2024-11-20 18:06:44.773302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.049 [2024-11-20 18:06:44.773469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.049 [2024-11-20 18:06:44.773621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.049 [2024-11-20 18:06:44.773628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.049 [2024-11-20 18:06:44.773634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.049 [2024-11-20 18:06:44.776042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2935802 Killed "${NVMF_APP[@]}" "$@"
00:39:45.049 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:39:45.049 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:39:45.049 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:39:45.049 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:45.049 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=2937472
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 2937472
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2937472 ']'
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:45.050 18:06:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:45.050 [2024-11-20 18:06:44.840596] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:39:45.050 [2024-11-20 18:06:44.840642] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:45.051 [2024-11-20 18:06:44.920570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:39:45.051 [2024-11-20 18:06:44.948651] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:45.051 [2024-11-20 18:06:44.948679] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:45.051 [2024-11-20 18:06:44.948685] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:45.051 [2024-11-20 18:06:44.948689] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:45.051 [2024-11-20 18:06:44.948694] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:45.051 [2024-11-20 18:06:44.948830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:39:45.051 [2024-11-20 18:06:44.948987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:39:45.051 [2024-11-20 18:06:44.948990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:39:45.313 [2024-11-20 18:06:44.975268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.313 [2024-11-20 18:06:44.975653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-11-20 18:06:44.975668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.313 [2024-11-20 18:06:44.975674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.313 [2024-11-20 18:06:44.975822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.313 [2024-11-20 18:06:44.975972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.313 [2024-11-20 18:06:44.975979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.313 [2024-11-20 18:06:44.975984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.313 [2024-11-20 18:06:44.978384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.313 [2024-11-20 18:06:44.987847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.313 [2024-11-20 18:06:44.988431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-11-20 18:06:44.988469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.313 [2024-11-20 18:06:44.988479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.313 [2024-11-20 18:06:44.988651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.313 [2024-11-20 18:06:44.988804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.313 [2024-11-20 18:06:44.988810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.313 [2024-11-20 18:06:44.988816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.313 [2024-11-20 18:06:44.991224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.313 [2024-11-20 18:06:45.000553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.313 [2024-11-20 18:06:45.001175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-11-20 18:06:45.001208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.313 [2024-11-20 18:06:45.001217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.313 [2024-11-20 18:06:45.001385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.313 [2024-11-20 18:06:45.001538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.313 [2024-11-20 18:06:45.001544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.313 [2024-11-20 18:06:45.001550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.313 [2024-11-20 18:06:45.003953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.313 [2024-11-20 18:06:45.013134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.313 [2024-11-20 18:06:45.013743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-11-20 18:06:45.013775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.313 [2024-11-20 18:06:45.013784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.313 [2024-11-20 18:06:45.013949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.313 [2024-11-20 18:06:45.014101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.313 [2024-11-20 18:06:45.014108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.313 [2024-11-20 18:06:45.014114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.313 [2024-11-20 18:06:45.016523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.313 [2024-11-20 18:06:45.025719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.313 [2024-11-20 18:06:45.026261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.313 [2024-11-20 18:06:45.026293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.313 [2024-11-20 18:06:45.026302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.313 [2024-11-20 18:06:45.026471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.026623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.026630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.026637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.029044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.314 [2024-11-20 18:06:45.038382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.038990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.039021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.039031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.039201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.039354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.039361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.039368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.041769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.314 [2024-11-20 18:06:45.050955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.051496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.051529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.051538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.051703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.051856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.051863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.051868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.054277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.314 [2024-11-20 18:06:45.063598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.064244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.064276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.064285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.064452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.064605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.064612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.064618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.067028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.314 [2024-11-20 18:06:45.076216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.076729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.076745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.076751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.076900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.077049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.077056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.077061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.079464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.314 [2024-11-20 18:06:45.088919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.089523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.089556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.089565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.089730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.089882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.089890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.089895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.092304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.314 [2024-11-20 18:06:45.101624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.102047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.102063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.102069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.102224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.102374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.102380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.102386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.104782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.314 [2024-11-20 18:06:45.114239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.114843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.114879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.114888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.115053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.115212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.115220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.115226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.117628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.314 [2024-11-20 18:06:45.126817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.127393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.127425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.127434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.127602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.314 [2024-11-20 18:06:45.127754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.314 [2024-11-20 18:06:45.127761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.314 [2024-11-20 18:06:45.127768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.314 [2024-11-20 18:06:45.130175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.314 [2024-11-20 18:06:45.139506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.314 [2024-11-20 18:06:45.139977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.314 [2024-11-20 18:06:45.140009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.314 [2024-11-20 18:06:45.140017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.314 [2024-11-20 18:06:45.140191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.315 [2024-11-20 18:06:45.140344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.315 [2024-11-20 18:06:45.140351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.315 [2024-11-20 18:06:45.140357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.315 [2024-11-20 18:06:45.142758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.315 [2024-11-20 18:06:45.152091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.315 [2024-11-20 18:06:45.152622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-11-20 18:06:45.152654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.315 [2024-11-20 18:06:45.152663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.315 [2024-11-20 18:06:45.152828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.315 [2024-11-20 18:06:45.152984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.315 [2024-11-20 18:06:45.152991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.315 [2024-11-20 18:06:45.152997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.315 [2024-11-20 18:06:45.155405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.315 4850.67 IOPS, 18.95 MiB/s [2024-11-20T17:06:45.231Z] [2024-11-20 18:06:45.164740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.315 [2024-11-20 18:06:45.165383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-11-20 18:06:45.165415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.315 [2024-11-20 18:06:45.165424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.315 [2024-11-20 18:06:45.165589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.315 [2024-11-20 18:06:45.165741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.315 [2024-11-20 18:06:45.165748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.315 [2024-11-20 18:06:45.165754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.315 [2024-11-20 18:06:45.168161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.315 [2024-11-20 18:06:45.177348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.315 [2024-11-20 18:06:45.177869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-11-20 18:06:45.177901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.315 [2024-11-20 18:06:45.177910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.315 [2024-11-20 18:06:45.178075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.315 [2024-11-20 18:06:45.178237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.315 [2024-11-20 18:06:45.178245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.315 [2024-11-20 18:06:45.178251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.315 [2024-11-20 18:06:45.180653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.315 [2024-11-20 18:06:45.189982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.315 [2024-11-20 18:06:45.190546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-11-20 18:06:45.190579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.315 [2024-11-20 18:06:45.190588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.315 [2024-11-20 18:06:45.190754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.315 [2024-11-20 18:06:45.190906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.315 [2024-11-20 18:06:45.190913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.315 [2024-11-20 18:06:45.190918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.315 [2024-11-20 18:06:45.193330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.315 [2024-11-20 18:06:45.202659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.315 [2024-11-20 18:06:45.203228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-11-20 18:06:45.203260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.315 [2024-11-20 18:06:45.203269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.315 [2024-11-20 18:06:45.203434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.315 [2024-11-20 18:06:45.203587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.315 [2024-11-20 18:06:45.203594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.315 [2024-11-20 18:06:45.203600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.315 [2024-11-20 18:06:45.206008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.315 [2024-11-20 18:06:45.215340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.315 [2024-11-20 18:06:45.215954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.315 [2024-11-20 18:06:45.215986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.315 [2024-11-20 18:06:45.215995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.315 [2024-11-20 18:06:45.216166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.315 [2024-11-20 18:06:45.216319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.315 [2024-11-20 18:06:45.216327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.315 [2024-11-20 18:06:45.216332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.315 [2024-11-20 18:06:45.218735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.577 [2024-11-20 18:06:45.227928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.577 [2024-11-20 18:06:45.228403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.577 [2024-11-20 18:06:45.228419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.577 [2024-11-20 18:06:45.228425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.577 [2024-11-20 18:06:45.228574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.577 [2024-11-20 18:06:45.228724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.577 [2024-11-20 18:06:45.228732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.577 [2024-11-20 18:06:45.228738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.577 [2024-11-20 18:06:45.231136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.577 [2024-11-20 18:06:45.240609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.577 [2024-11-20 18:06:45.241037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.577 [2024-11-20 18:06:45.241050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.577 [2024-11-20 18:06:45.241059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.577 [2024-11-20 18:06:45.241213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.577 [2024-11-20 18:06:45.241363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.577 [2024-11-20 18:06:45.241370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.577 [2024-11-20 18:06:45.241375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.577 [2024-11-20 18:06:45.243773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.577 [2024-11-20 18:06:45.253265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.577 [2024-11-20 18:06:45.253851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.577 [2024-11-20 18:06:45.253883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.577 [2024-11-20 18:06:45.253891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.577 [2024-11-20 18:06:45.254058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.577 [2024-11-20 18:06:45.254216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.577 [2024-11-20 18:06:45.254224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.577 [2024-11-20 18:06:45.254230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.577 [2024-11-20 18:06:45.256632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.577 [2024-11-20 18:06:45.265956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.577 [2024-11-20 18:06:45.266537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.577 [2024-11-20 18:06:45.266570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.577 [2024-11-20 18:06:45.266579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.577 [2024-11-20 18:06:45.266746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.577 [2024-11-20 18:06:45.266898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.577 [2024-11-20 18:06:45.266906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.577 [2024-11-20 18:06:45.266912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.577 [2024-11-20 18:06:45.269319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.577 [2024-11-20 18:06:45.278558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.577 [2024-11-20 18:06:45.279117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.577 [2024-11-20 18:06:45.279148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.577 [2024-11-20 18:06:45.279157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.577 [2024-11-20 18:06:45.279328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.577 [2024-11-20 18:06:45.279480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.577 [2024-11-20 18:06:45.279491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.577 [2024-11-20 18:06:45.279498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.577 [2024-11-20 18:06:45.281902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.577 [2024-11-20 18:06:45.291231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.577 [2024-11-20 18:06:45.291693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.577 [2024-11-20 18:06:45.291708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.577 [2024-11-20 18:06:45.291714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.577 [2024-11-20 18:06:45.291864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.577 [2024-11-20 18:06:45.292013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.577 [2024-11-20 18:06:45.292020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.577 [2024-11-20 18:06:45.292025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.577 [2024-11-20 18:06:45.294429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.577 [2024-11-20 18:06:45.303894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.577 [2024-11-20 18:06:45.304484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.577 [2024-11-20 18:06:45.304516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.577 [2024-11-20 18:06:45.304525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.577 [2024-11-20 18:06:45.304690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.577 [2024-11-20 18:06:45.304842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.577 [2024-11-20 18:06:45.304849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.577 [2024-11-20 18:06:45.304855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.577 [2024-11-20 18:06:45.307265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.578 [2024-11-20 18:06:45.316594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.317072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.317104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.317113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.317286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.317440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.317447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.317452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.319854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.578 [2024-11-20 18:06:45.329197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.329738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.329771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.329780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.329945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.330097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.330103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.330109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.332519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.578 [2024-11-20 18:06:45.341855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.342429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.342461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.342470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.342635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.342787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.342795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.342801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.345211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.578 [2024-11-20 18:06:45.354543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.355078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.355110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.355119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.355291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.355443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.355451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.355457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.357862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.578 [2024-11-20 18:06:45.367192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.367752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.367784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.367793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.367961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.368114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.368120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.368127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.370536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.578 [2024-11-20 18:06:45.379863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.380373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.380405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.380414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.380582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.380734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.380741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.380747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.383153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.578 [2024-11-20 18:06:45.392482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.392981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.392996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.393003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.393152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.393308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.393315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.393320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.395720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.578 [2024-11-20 18:06:45.405189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.405612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.405624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.405630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.405778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.405928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.405935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.405944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.408347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.578 [2024-11-20 18:06:45.417814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.418407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.418439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.418448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.418613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.418765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.418772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.418778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.421185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.578 [2024-11-20 18:06:45.430524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.430987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.578 [2024-11-20 18:06:45.431003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.578 [2024-11-20 18:06:45.431009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.578 [2024-11-20 18:06:45.431162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.578 [2024-11-20 18:06:45.431313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.578 [2024-11-20 18:06:45.431320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.578 [2024-11-20 18:06:45.431325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.578 [2024-11-20 18:06:45.433723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.578 [2024-11-20 18:06:45.443199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.578 [2024-11-20 18:06:45.443520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.579 [2024-11-20 18:06:45.443533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.579 [2024-11-20 18:06:45.443539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.579 [2024-11-20 18:06:45.443688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.579 [2024-11-20 18:06:45.443838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.579 [2024-11-20 18:06:45.443844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.579 [2024-11-20 18:06:45.443849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.579 [2024-11-20 18:06:45.446249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.579 [2024-11-20 18:06:45.455853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.579 [2024-11-20 18:06:45.456466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.579 [2024-11-20 18:06:45.456499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.579 [2024-11-20 18:06:45.456507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.579 [2024-11-20 18:06:45.456672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.579 [2024-11-20 18:06:45.456825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.579 [2024-11-20 18:06:45.456832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.579 [2024-11-20 18:06:45.456838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.579 [2024-11-20 18:06:45.459250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.579 [2024-11-20 18:06:45.468439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.579 [2024-11-20 18:06:45.469059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.579 [2024-11-20 18:06:45.469091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.579 [2024-11-20 18:06:45.469100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.579 [2024-11-20 18:06:45.469272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.579 [2024-11-20 18:06:45.469425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.579 [2024-11-20 18:06:45.469432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.579 [2024-11-20 18:06:45.469437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.579 [2024-11-20 18:06:45.471839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.579 [2024-11-20 18:06:45.481025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.579 [2024-11-20 18:06:45.481268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.579 [2024-11-20 18:06:45.481290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.579 [2024-11-20 18:06:45.481297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.579 [2024-11-20 18:06:45.481453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.579 [2024-11-20 18:06:45.481604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.579 [2024-11-20 18:06:45.481612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.579 [2024-11-20 18:06:45.481617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.579 [2024-11-20 18:06:45.484018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.841 [2024-11-20 18:06:45.493628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.841 [2024-11-20 18:06:45.494083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.841 [2024-11-20 18:06:45.494098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.841 [2024-11-20 18:06:45.494105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.841 [2024-11-20 18:06:45.494268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.841 [2024-11-20 18:06:45.494418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.841 [2024-11-20 18:06:45.494425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.841 [2024-11-20 18:06:45.494430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.841 [2024-11-20 18:06:45.496828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.841 [2024-11-20 18:06:45.506465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.841 [2024-11-20 18:06:45.506918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.841 [2024-11-20 18:06:45.506933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.841 [2024-11-20 18:06:45.506939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.841 [2024-11-20 18:06:45.507088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.841 [2024-11-20 18:06:45.507242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.507249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.507254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.509652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.842 [2024-11-20 18:06:45.519119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.519721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.519753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.519763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.519928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.520080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.520087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.520093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.522500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.842 [2024-11-20 18:06:45.531728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.532264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.532296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.532305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.532471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.532624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.532631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.532640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.535051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.842 [2024-11-20 18:06:45.544396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.545003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.545035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.545044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.545214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.545367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.545375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.545381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.547784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.842 [2024-11-20 18:06:45.556973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.557426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.557457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.557467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.557632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.557784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.557791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.557797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.560207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.842 [2024-11-20 18:06:45.569681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.570109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.570140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.570149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.570321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.570474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.570481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.570487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.572890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
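Note the cadence: successive connect() failures land at 18:06:45.481268, .494083, .506918, .519721, ..., i.e. the host retries at roughly a 12-13 ms period on this run. One interval, taken from the timestamps above:

    python3 -c 'print(45.494083 - 45.481268)'
    # ~0.0128 s between consecutive reconnect attempts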
00:39:45.842 [2024-11-20 18:06:45.582363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.582836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.582856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.582862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.583011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.583168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.583175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.583180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.585580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.842 [2024-11-20 18:06:45.595042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.595481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.595495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.595501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.595650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.595799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.595806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.595811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.598211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.842 [2024-11-20 18:06:45.607677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.608224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.608256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.608265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.608432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.608585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.608592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.608599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.611009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.842 [2024-11-20 18:06:45.620345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.620921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.620953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.620962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.621127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.621289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.842 [2024-11-20 18:06:45.621297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.842 [2024-11-20 18:06:45.621303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.842 [2024-11-20 18:06:45.623714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.842 [2024-11-20 18:06:45.633048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.842 [2024-11-20 18:06:45.633553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.842 [2024-11-20 18:06:45.633569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.842 [2024-11-20 18:06:45.633576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.842 [2024-11-20 18:06:45.633725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.842 [2024-11-20 18:06:45.633875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.633882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.633887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.843 [2024-11-20 18:06:45.636287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:45.843 [2024-11-20 18:06:45.645759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.843 [2024-11-20 18:06:45.646259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.843 [2024-11-20 18:06:45.646273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.843 [2024-11-20 18:06:45.646279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.843 [2024-11-20 18:06:45.646428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.843 [2024-11-20 18:06:45.646577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.646584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.646589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.843 [2024-11-20 18:06:45.648987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
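From here on, the shell trace of the test script (the `18:06:45 ... -- # ...` records such as `timing_exit start_nvmf_tgt` and the `rpc_cmd` calls) is threaded through the reconnect storm: the SPDK application's log and bash's xtrace output share the same console, so the target-side setup appears interleaved with the asynchronous reset errors rather than as a contiguous block.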
00:39:45.843 [2024-11-20 18:06:45.658490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.843 [2024-11-20 18:06:45.658942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.843 [2024-11-20 18:06:45.658955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.843 [2024-11-20 18:06:45.658960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.843 [2024-11-20 18:06:45.659109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.843 [2024-11-20 18:06:45.659267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.659274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.659279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.843 [2024-11-20 18:06:45.661678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.843 [2024-11-20 18:06:45.671148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.843 [2024-11-20 18:06:45.671602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.843 [2024-11-20 18:06:45.671615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.843 [2024-11-20 18:06:45.671621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.843 [2024-11-20 18:06:45.671770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.843 [2024-11-20 18:06:45.671919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.671926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.671931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.843 [2024-11-20 18:06:45.674332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.843 [2024-11-20 18:06:45.683796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:45.843 [2024-11-20 18:06:45.684266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.843 [2024-11-20 18:06:45.684280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.843 [2024-11-20 18:06:45.684285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.843 [2024-11-20 18:06:45.684434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.843 [2024-11-20 18:06:45.684583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.684589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.684595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.843 [2024-11-20 18:06:45.686925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.843 [2024-11-20 18:06:45.686993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.843 [2024-11-20 18:06:45.696456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.843 [2024-11-20 18:06:45.696887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.843 [2024-11-20 18:06:45.696899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.843 [2024-11-20 18:06:45.696905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.843 [2024-11-20 18:06:45.697058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.843 [2024-11-20 18:06:45.697211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.697219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.697224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.843 [2024-11-20 18:06:45.699619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:45.843 [2024-11-20 18:06:45.709084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.843 [2024-11-20 18:06:45.709566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.843 [2024-11-20 18:06:45.709579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.843 [2024-11-20 18:06:45.709585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.843 [2024-11-20 18:06:45.709734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.843 [2024-11-20 18:06:45.709884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.709891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.709896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.843 [2024-11-20 18:06:45.712297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.843 [2024-11-20 18:06:45.721762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.843 [2024-11-20 18:06:45.722237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.843 [2024-11-20 18:06:45.722251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.843 [2024-11-20 18:06:45.722256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.843 [2024-11-20 18:06:45.722406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.843 [2024-11-20 18:06:45.722555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.722562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.722567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:39:45.843 Malloc0 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:45.843 [2024-11-20 18:06:45.724969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:45.843 [2024-11-20 18:06:45.734440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.843 [2024-11-20 18:06:45.734934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.843 [2024-11-20 18:06:45.734951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.843 [2024-11-20 18:06:45.734956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.843 [2024-11-20 18:06:45.735105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.843 [2024-11-20 18:06:45.735258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.843 [2024-11-20 18:06:45.735266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.843 [2024-11-20 18:06:45.735272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.843 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:45.844 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.844 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:45.844 [2024-11-20 18:06:45.737677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:45.844 [2024-11-20 18:06:45.747145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:45.844 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.844 [2024-11-20 18:06:45.747498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:45.844 [2024-11-20 18:06:45.747510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c0550 with addr=10.0.0.2, port=4420 00:39:45.844 [2024-11-20 18:06:45.747516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c0550 is same with the state(6) to be set 00:39:45.844 [2024-11-20 18:06:45.747664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0550 (9): Bad file descriptor 00:39:45.844 [2024-11-20 18:06:45.747814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:45.844 [2024-11-20 18:06:45.747821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:45.844 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:45.844 [2024-11-20 18:06:45.747826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:45.844 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.844 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:45.844 [2024-11-20 18:06:45.750228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:46.104 [2024-11-20 18:06:45.754580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:46.104 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:46.104 [2024-11-20 18:06:45.759830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:46.104 18:06:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2936467 00:39:46.104 [2024-11-20 18:06:45.908046] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
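Stripped of the interleaved error records, the target-side configuration that just completed is the following five RPCs, exactly as the script issued them through its rpc_cmd wrapper. The `nvmf_tcp_listen` notice above marks the moment the listener finally came up, after which the host's next reset attempt succeeds ("Resetting controller successful"):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420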
00:39:47.307 4638.14 IOPS, 18.12 MiB/s [2024-11-20T17:06:48.608Z] 5675.75 IOPS, 22.17 MiB/s [2024-11-20T17:06:49.180Z] 6474.89 IOPS, 25.29 MiB/s [2024-11-20T17:06:50.561Z] 7124.40 IOPS, 27.83 MiB/s [2024-11-20T17:06:51.501Z] 7648.55 IOPS, 29.88 MiB/s [2024-11-20T17:06:52.441Z] 8106.92 IOPS, 31.67 MiB/s [2024-11-20T17:06:53.383Z] 8477.38 IOPS, 33.11 MiB/s [2024-11-20T17:06:54.324Z] 8794.64 IOPS, 34.35 MiB/s [2024-11-20T17:06:54.324Z] 9063.73 IOPS, 35.41 MiB/s 00:39:54.408 Latency(us) 00:39:54.408 [2024-11-20T17:06:54.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:54.408 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:54.408 Verification LBA range: start 0x0 length 0x4000 00:39:54.408 Nvme1n1 : 15.01 9067.61 35.42 13986.61 0.00 5534.15 546.13 13271.04 00:39:54.408 [2024-11-20T17:06:54.324Z] =================================================================================================================== 00:39:54.408 [2024-11-20T17:06:54.324Z] Total : 9067.61 35.42 13986.61 0.00 5534.15 546.13 13271.04 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:54.408 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:54.670 rmmod nvme_tcp 00:39:54.670 rmmod nvme_fabrics 00:39:54.670 rmmod nvme_keyring 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 2937472 ']' 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 2937472 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2937472 ']' 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2937472 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2937472 
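Reading the bdevperf summary above: over the 15.01 s measurement window Nvme1n1 averaged 9067.61 IOPS at 35.42 MiB/s with a mean latency of 5534.15 us (min 546.13, max 13271.04) and zero timeouts; the large Fail/s figure is presumably dominated by I/O failed while the controller was being reset during the induced disconnect windows. The MiB/s column is consistent with the 4 KiB I/O size:

    python3 -c 'print(9067.61 * 4096 / 1048576)'
    # ~35.42 MiB/s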
00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2937472' 00:39:54.670 killing process with pid 2937472 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2937472 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2937472 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:39:54.670 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:39:54.931 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:54.931 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:54.931 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.931 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:54.931 18:06:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:56.849 18:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:56.849 00:39:56.849 real 0m28.013s 00:39:56.849 user 1m3.245s 00:39:56.849 sys 0m7.502s 00:39:56.849 18:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:56.849 18:06:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:56.849 ************************************ 00:39:56.849 END TEST nvmf_bdevperf 00:39:56.849 ************************************ 00:39:56.850 18:06:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:56.850 18:06:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:56.850 18:06:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:56.850 18:06:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.850 ************************************ 00:39:56.850 START TEST nvmf_target_disconnect 00:39:56.850 ************************************ 00:39:56.850 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:57.112 * Looking for test storage... 
00:39:57.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.112 --rc genhtml_branch_coverage=1 00:39:57.112 --rc genhtml_function_coverage=1 00:39:57.112 --rc genhtml_legend=1 00:39:57.112 --rc geninfo_all_blocks=1 00:39:57.112 --rc geninfo_unexecuted_blocks=1 00:39:57.112 00:39:57.112 ' 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.112 --rc genhtml_branch_coverage=1 00:39:57.112 --rc genhtml_function_coverage=1 00:39:57.112 --rc genhtml_legend=1 00:39:57.112 --rc geninfo_all_blocks=1 00:39:57.112 --rc geninfo_unexecuted_blocks=1 00:39:57.112 00:39:57.112 ' 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.112 --rc genhtml_branch_coverage=1 00:39:57.112 --rc genhtml_function_coverage=1 00:39:57.112 --rc genhtml_legend=1 00:39:57.112 --rc geninfo_all_blocks=1 00:39:57.112 --rc geninfo_unexecuted_blocks=1 00:39:57.112 00:39:57.112 ' 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.112 --rc genhtml_branch_coverage=1 00:39:57.112 --rc genhtml_function_coverage=1 00:39:57.112 --rc genhtml_legend=1 00:39:57.112 --rc geninfo_all_blocks=1 00:39:57.112 --rc geninfo_unexecuted_blocks=1 00:39:57.112 00:39:57.112 ' 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:57.112 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:57.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:39:57.113 18:06:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:05.250 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:05.250 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:05.250 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:05.250 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
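The device discovery traced above follows the standard sysfs layout: for each supported PCI function the script globs /sys/bus/pci/devices/$pci/net/* to find the kernel netdev bound to it (here the two E810 ports, named cvl_0_0 and cvl_0_1 on this rig). Reduced to its core idea, the mapping is (a sketch using this rig's BDFs):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $dev ]] && echo "$pci -> ${dev##*/}"    # e.g. 0000:4b:00.0 -> cvl_0_0
      done
    done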
00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:05.250 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:05.251 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:05.251 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:05.251 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:05.251 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:05.251 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:05.251 18:07:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:05.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:05.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:40:05.251 00:40:05.251 --- 10.0.0.2 ping statistics --- 00:40:05.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.251 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:05.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:05.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:40:05.251 00:40:05.251 --- 10.0.0.1 ping statistics --- 00:40:05.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.251 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:05.251 ************************************ 00:40:05.251 START TEST nvmf_target_disconnect_tc1 00:40:05.251 ************************************ 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:05.251 18:07:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:05.251 [2024-11-20 18:07:04.268086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.251 [2024-11-20 18:07:04.268147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18951c0 with addr=10.0.0.2, port=4420 00:40:05.251 [2024-11-20 18:07:04.268179] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:05.251 [2024-11-20 18:07:04.268192] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:05.251 [2024-11-20 18:07:04.268200] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:40:05.251 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:40:05.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:40:05.251 Initializing NVMe Controllers 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:05.251 00:40:05.251 real 0m0.121s 00:40:05.251 user 0m0.046s 00:40:05.251 sys 0m0.074s 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:40:05.251 ************************************ 00:40:05.251 END TEST nvmf_target_disconnect_tc1 00:40:05.251 ************************************ 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:05.251 ************************************ 00:40:05.251 START TEST nvmf_target_disconnect_tc2 00:40:05.251 ************************************ 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=2943458 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 2943458 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2943458 ']' 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:05.251 18:07:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.251 [2024-11-20 18:07:04.394953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:40:05.251 [2024-11-20 18:07:04.395002] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:05.251 [2024-11-20 18:07:04.475490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:05.251 [2024-11-20 18:07:04.508113] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:05.251 [2024-11-20 18:07:04.508150] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
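[Editor's note] The nvmf_tcp_init block above builds a point-to-point topology out of the two E810 ports: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) as the target side with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables rule admits TCP traffic to the NVMe/TCP port 4420, and a ping in each direction verifies the path before any SPDK process starts. A condensed sketch of the same bring-up, reusing the interface and namespace names from the log:

    # Target port lives in its own netns; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check connectivity in both directions.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1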
00:40:05.251 [2024-11-20 18:07:04.508163] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:05.251 [2024-11-20 18:07:04.508171] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:05.251 [2024-11-20 18:07:04.508177] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:05.251 [2024-11-20 18:07:04.508326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:40:05.252 [2024-11-20 18:07:04.508475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:40:05.252 [2024-11-20 18:07:04.508624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:05.252 [2024-11-20 18:07:04.508625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.512 Malloc0 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.512 [2024-11-20 18:07:05.251167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.512 18:07:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.512 [2024-11-20 18:07:05.291601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.512 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.513 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2943575 00:40:05.513 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:40:05.513 18:07:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:07.425 18:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2943458 00:40:07.425 18:07:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error 
(sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 [2024-11-20 18:07:07.325806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, 
sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Write completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 Read completed with error (sct=0, sc=8) 00:40:07.425 starting I/O failed 00:40:07.425 [2024-11-20 18:07:07.326140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:07.425 [2024-11-20 18:07:07.326611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.425 [2024-11-20 18:07:07.326653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.425 qpair failed and we were unable to recover it. 00:40:07.425 [2024-11-20 18:07:07.326913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.425 [2024-11-20 18:07:07.326925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.425 qpair failed and we were unable to recover it. 00:40:07.425 [2024-11-20 18:07:07.327132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.425 [2024-11-20 18:07:07.327146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.425 qpair failed and we were unable to recover it. 00:40:07.425 [2024-11-20 18:07:07.327499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.425 [2024-11-20 18:07:07.327538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.425 qpair failed and we were unable to recover it. 00:40:07.425 [2024-11-20 18:07:07.327775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.425 [2024-11-20 18:07:07.327789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.425 qpair failed and we were unable to recover it. 
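[Editor's note] The tc1 block earlier in this trace runs the reconnect example through the NOT wrapper from autotest_common.sh: the test passes precisely because spdk_nvme_probe() hits a refused connection while no target is listening yet, so the wrapped command exits nonzero and es=1 is the expected outcome. A simplified sketch of those semantics (the real helper additionally treats exit codes above 128, i.e. deaths by signal, as genuine failures):

    # Assert that a command fails; invert its status unless it died by signal.
    NOT() {
        "$@"
        local es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: real failure
        (( es == 0 )) && return 1        # unexpectedly succeeded
        return 0                         # failed as expected
    }
    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'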
00:40:07.425 [2024-11-20 18:07:07.328121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.425 [2024-11-20 18:07:07.328134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.425 qpair failed and we were unable to recover it. 00:40:07.425 [2024-11-20 18:07:07.328459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.425 [2024-11-20 18:07:07.328471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.425 qpair failed and we were unable to recover it. 00:40:07.426 [2024-11-20 18:07:07.328795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.328807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 00:40:07.426 [2024-11-20 18:07:07.328999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.329012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 00:40:07.426 [2024-11-20 18:07:07.329272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.329285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 00:40:07.426 [2024-11-20 18:07:07.329568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.329580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 00:40:07.426 [2024-11-20 18:07:07.329925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.329937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 00:40:07.426 [2024-11-20 18:07:07.330127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.330139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 00:40:07.426 [2024-11-20 18:07:07.330418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.330434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 00:40:07.426 [2024-11-20 18:07:07.330754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.330766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 
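[Editor's note] The rpc_cmd calls above assemble the tc2 target inside the namespace: a 64 MiB malloc bdev with 512-byte blocks, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on 10.0.0.2:4420; the reconnect example is then pointed at that address before the target is killed with -9. The equivalent bring-up with SPDK's rpc.py would look roughly like this (binary and script paths are illustrative, and the log drives the same RPCs through its rpc_cmd helper):

    # Start the target in the namespace, then configure it over the RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xF0 &
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420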
00:40:07.426 [2024-11-20 18:07:07.331049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.426 [2024-11-20 18:07:07.331061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.426 qpair failed and we were unable to recover it. 00:40:07.426 [... the identical connect() failed (errno = 111) / sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it triplet repeats for every further reconnect attempt through 18:07:07.364613 ...]
00:40:07.700 [2024-11-20 18:07:07.364943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.700 [2024-11-20 18:07:07.364957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.700 qpair failed and we were unable to recover it. 00:40:07.700 [2024-11-20 18:07:07.365231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.700 [2024-11-20 18:07:07.365244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.700 qpair failed and we were unable to recover it. 00:40:07.700 [2024-11-20 18:07:07.365437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.365450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.365769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.365781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.366111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.366125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.366466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.366480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.366649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.366662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.366953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.366966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.367145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.367163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.367468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.367482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 
00:40:07.701 [2024-11-20 18:07:07.367651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.367665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.367955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.367969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.368279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.368294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.368489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.368502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.368774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.368787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.369068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.369081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.369393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.369406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.369760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.369773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.370103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.370125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.370473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.370492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 
00:40:07.701 [2024-11-20 18:07:07.370809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.370827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.371118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.371135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.371359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.371378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.371707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.371725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.372052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.372069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.372371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.372390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.372699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.372717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.373061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.373079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.373376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.373394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.373749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.373768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 
00:40:07.701 [2024-11-20 18:07:07.374137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.374155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.374474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.374494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.374782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.374800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.374989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.375007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.375328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.375346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.375685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.375703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.375992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.376010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.376222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.376241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.376596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.701 [2024-11-20 18:07:07.376615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.701 qpair failed and we were unable to recover it. 00:40:07.701 [2024-11-20 18:07:07.376819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.376836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 
00:40:07.702 [2024-11-20 18:07:07.377170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.377190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.377383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.377400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.377759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.377776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.378054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.378071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.378441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.378459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.378792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.378811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.379112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.379130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.379353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.379371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.379651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.379669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.379986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.380003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 
00:40:07.702 [2024-11-20 18:07:07.380327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.380347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.380624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.380641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.380958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.380976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.381187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.381208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.381514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.381532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.381821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.381839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.382141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.382165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.382446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.382464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.382803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.382824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.383127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.383146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 
00:40:07.702 [2024-11-20 18:07:07.383484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.383506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.383733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.383754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.383992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.384017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.384375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.384398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.384718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.384741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.385022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.385044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.385480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.385502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.385795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.385818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.386121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.386144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.386471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.386494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 
00:40:07.702 [2024-11-20 18:07:07.386788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.386809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.387121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.387142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.387513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.387537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.387856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.387878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.388171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.388195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.388562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.388584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.702 [2024-11-20 18:07:07.388895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.702 [2024-11-20 18:07:07.388918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.702 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.389278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.389301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.389655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.389677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.389986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.390009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 
00:40:07.703 [2024-11-20 18:07:07.390276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.390298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.390673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.390695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.391007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.391029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.391253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.391275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.391615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.391636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.391990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.392012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.392350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.392373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.392710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.392731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.392941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.392963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.393220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.393243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 
00:40:07.703 [2024-11-20 18:07:07.393525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.393547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.393871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.393893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.394205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.394229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.394544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.394566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.394885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.394907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.395097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.395122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.395447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.395470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.395679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.395700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.395921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.395948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.396178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.396201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 
00:40:07.703 [2024-11-20 18:07:07.396508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.396530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.396857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.396887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.397230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.397261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.397566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.397597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.397814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.397843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.398099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.398128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.398483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.398514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.398865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.398895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.703 [2024-11-20 18:07:07.399218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.703 [2024-11-20 18:07:07.399249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.703 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.399610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.399640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 
00:40:07.704 [2024-11-20 18:07:07.399986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.400016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.400379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.400409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.400726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.400757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.401127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.401157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.401537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.401566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.402039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.402070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.402403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.402434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.402675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.402704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.403027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.403056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.403366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.403396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 
00:40:07.704 [2024-11-20 18:07:07.403768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.403797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.404131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.404169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.404579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.404609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.404834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.404863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.405097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.405128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.405529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.405561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.405931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.405961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.406329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.406360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.406693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.406724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 00:40:07.704 [2024-11-20 18:07:07.407078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.407108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it. 
00:40:07.704 [2024-11-20 18:07:07.407460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.704 [2024-11-20 18:07:07.407492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.704 qpair failed and we were unable to recover it.
00:40:07.704-00:40:07.710 [... the same three-message sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated ~210 times between 2024-11-20 18:07:07.407460 and 18:07:07.483228; per-repetition timestamps omitted ...]
00:40:07.710 [2024-11-20 18:07:07.483570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.483599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.483960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.483989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.484330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.484361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.484736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.484766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.485169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.485199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.485535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.485564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.485918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.485948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.486296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.486327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.486568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.486598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.486984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.487015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 
00:40:07.710 [2024-11-20 18:07:07.487342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.487372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.487629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.487657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.488021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.488051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.488333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.488365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.488743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.488772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.489073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.489103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.489438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.489469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.489873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.489904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.490181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.490211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.490549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.490580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 
00:40:07.710 [2024-11-20 18:07:07.490815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.490846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.491190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.491221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.491566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.491596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.492005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.492034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.492423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.492454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.710 [2024-11-20 18:07:07.492790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.710 [2024-11-20 18:07:07.492821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.710 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.493168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.493199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.493568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.493599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.493944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.493974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.494335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.494366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 
00:40:07.711 [2024-11-20 18:07:07.494697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.494726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.495085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.495114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.495444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.495477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.495785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.495815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.496187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.496218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.496440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.496469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.496778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.496808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.497214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.497245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.497627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.497656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.498038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.498067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 
00:40:07.711 [2024-11-20 18:07:07.498396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.498427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.498777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.498808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.499140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.499177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.499617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.499646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.499990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.500020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.500269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.500298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.500647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.500676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.501020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.501050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.501395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.501425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.501804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.501834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 
00:40:07.711 [2024-11-20 18:07:07.502185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.502215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.502440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.502469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.502707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.502736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.503086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.503116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.503449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.503480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.503764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.503793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.504173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.504203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.504578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.504610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.504955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.504985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.505252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.505282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 
00:40:07.711 [2024-11-20 18:07:07.505635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.505664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.505947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.505976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.711 [2024-11-20 18:07:07.506312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.711 [2024-11-20 18:07:07.506343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.711 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.506654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.506684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.507025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.507056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.507394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.507430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.507750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.507780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.508107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.508136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.508474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.508504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.508878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.508908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 
00:40:07.712 [2024-11-20 18:07:07.509231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.509262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.509602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.509631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.509951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.509980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.510330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.510361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.510735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.510764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.511021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.511049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.511488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.511518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.511862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.511891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.512283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.512313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.512654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.512684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 
00:40:07.712 [2024-11-20 18:07:07.513088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.513118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.513525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.513557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.513802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.513830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.514188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.514222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.514604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.514634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.514968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.514998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.515360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.515390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.515724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.515753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.516104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.516134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.516466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.516498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 
00:40:07.712 [2024-11-20 18:07:07.516818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.516847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.517195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.517225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.517618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.517649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.517963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.517992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.518334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.518366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.518724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.712 [2024-11-20 18:07:07.518753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.712 qpair failed and we were unable to recover it. 00:40:07.712 [2024-11-20 18:07:07.518975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.519004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.519331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.519362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.519733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.519764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.520093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.520123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 
00:40:07.713 [2024-11-20 18:07:07.520499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.520530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.520885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.520914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.521235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.521266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.521611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.521640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.521998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.522027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.522242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.522277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.522619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.522648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.522986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.523015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.523388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.523419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.523675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.523704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 
00:40:07.713 [2024-11-20 18:07:07.524066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.524095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.524262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.524294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.524601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.524630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.524919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.524948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.525291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.525321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.525670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.525700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.525924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.525956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.526229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.526260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.526604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.526634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.526859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.526888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 
00:40:07.713 [2024-11-20 18:07:07.527129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.527170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.527513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.527542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.527894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.527924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.528257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.528288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.528590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.528620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.528950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.528979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.529352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.529382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.529720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.529750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.529976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.530005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 00:40:07.713 [2024-11-20 18:07:07.530250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.713 [2024-11-20 18:07:07.530281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.713 qpair failed and we were unable to recover it. 
00:40:07.713 [2024-11-20 18:07:07.530713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.713 [2024-11-20 18:07:07.530742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.713 qpair failed and we were unable to recover it.
[... 208 further near-identical records elided: the same connect() failure (errno = 111, ECONNREFUSED) against tqpair=0x7f0b0c000b90 (addr=10.0.0.2, port=4420) recurs on every reconnect attempt from 18:07:07.530 through 18:07:07.604, each ending "qpair failed and we were unable to recover it." ...]
00:40:07.986 [2024-11-20 18:07:07.604781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.986 [2024-11-20 18:07:07.604812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.986 qpair failed and we were unable to recover it.
00:40:07.986 [2024-11-20 18:07:07.605176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.605206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.605595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.605625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.605972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.606001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.606227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.606257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.606611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.606640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.606862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.606902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.607206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.607237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.607545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.607577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.607902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.986 [2024-11-20 18:07:07.607933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.986 qpair failed and we were unable to recover it. 00:40:07.986 [2024-11-20 18:07:07.608271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.608301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 
00:40:07.987 [2024-11-20 18:07:07.608530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.608559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.608776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.608805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.609046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.609076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.609438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.609469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.609782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.609812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.610133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.610184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.610538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.610569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.610804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.610833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.611189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.611221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.611609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.611639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 
00:40:07.987 [2024-11-20 18:07:07.611977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.612007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.612370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.612401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.612707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.612737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.613003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.613033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.613280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.613310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.613667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.613697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.614010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.614041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.614385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.614416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.614750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.614781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.615144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.615183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 
00:40:07.987 [2024-11-20 18:07:07.615529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.615560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.615937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.615967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.616309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.616342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.616677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.616709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.617022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.617053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.617405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.617436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.617755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.617786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.618135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.618185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.618554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.618586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.618915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.618944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 
00:40:07.987 [2024-11-20 18:07:07.619179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.619210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.619552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.619581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.619944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.619974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.620334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.620364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.620712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.620742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.621073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.621109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.621474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.621506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.621844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.621874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.622196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.622228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.622487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.622515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 
00:40:07.987 [2024-11-20 18:07:07.622872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.622902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.623172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.623202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.623450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.623483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.623795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.623825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.624181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.624212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.624537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.624566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.624907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.624936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.625257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.625288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.625585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.625615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.987 [2024-11-20 18:07:07.625962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.625993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 
00:40:07.987 [2024-11-20 18:07:07.626328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.987 [2024-11-20 18:07:07.626359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.987 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.626723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.626752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.626987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.627017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.627341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.627372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.627712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.627743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.628097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.628127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.628250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.628284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.628619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.628650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.628872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.628905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.629248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.629279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 
00:40:07.988 [2024-11-20 18:07:07.629592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.629620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.629936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.629966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.630298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.630331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.630682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.630712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.631011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.631041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.631370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.631400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.631736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.631767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.632120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.632149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.632456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.632487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.632821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.632851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 
00:40:07.988 [2024-11-20 18:07:07.633177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.633208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.633565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.633594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.633816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.633844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.634211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.634242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.634473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.634505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.634876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.634912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.635254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.635286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.635614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.635643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.635983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.636012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.636303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.636333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 
00:40:07.988 [2024-11-20 18:07:07.636642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.636671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.636994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.637023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.637333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.637365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.637729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.637759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.638064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.638093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.638418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.638449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.638788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.638818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.639182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.639213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.639566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.639596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.639816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.639849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 
00:40:07.988 [2024-11-20 18:07:07.640096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.640126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.640472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.640503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.640814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.640843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.641042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.641071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.641397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.641427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.641774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.641803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.642137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.642176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.642402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.642431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.642752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.642781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.643136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.643177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 
00:40:07.988 [2024-11-20 18:07:07.643515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.643545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.643895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.643926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.644239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.644270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.644579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.644609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.644906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.644935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.645271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.645302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.645518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.645550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.988 qpair failed and we were unable to recover it. 00:40:07.988 [2024-11-20 18:07:07.645870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.988 [2024-11-20 18:07:07.645900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.646138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.646190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.646484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.646513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 
00:40:07.989 [2024-11-20 18:07:07.646842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.646872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.647124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.647154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.647473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.647503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.647742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.647772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.648118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.648148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.648481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.648517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.648835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.648865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.649192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.649224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.649571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.649600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 00:40:07.989 [2024-11-20 18:07:07.649902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.649934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it. 
00:40:07.989 [2024-11-20 18:07:07.650234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.989 [2024-11-20 18:07:07.650265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.989 qpair failed and we were unable to recover it.
[... the three-message triplet above (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, timestamps aside, for the remaining ~200 connection attempts logged between 18:07:07.650 and 18:07:07.722; only the final attempt is kept below ...]
00:40:07.992 [2024-11-20 18:07:07.722760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.722790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it.
00:40:07.992 [2024-11-20 18:07:07.723119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.723150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.723419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.723451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.723817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.723847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.724178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.724210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.724452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.724481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.724784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.724813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.725172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.725202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.725549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.725579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.725913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.725944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.726294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.726324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 
00:40:07.992 [2024-11-20 18:07:07.726630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.726660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.726996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.727025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.727381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.727418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.727763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.727793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.728110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.728139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.728408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.728442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.992 [2024-11-20 18:07:07.728783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.992 [2024-11-20 18:07:07.728813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.992 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.729154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.729194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.729514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.729545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.729887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.729915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 
00:40:07.993 [2024-11-20 18:07:07.730246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.730277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.730661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.730691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.731005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.731035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.731393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.731424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.731758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.731788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.732094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.732123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.732474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.732506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.732851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.732881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.733226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.733256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.733559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.733589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 
00:40:07.993 [2024-11-20 18:07:07.733945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.733975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.734305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.734336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.734669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.734698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.735067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.735097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.735408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.735439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.735780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.735809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.736059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.736091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.736462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.736494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.736853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.736883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.737209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.737239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 
00:40:07.993 [2024-11-20 18:07:07.737585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.737615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.737964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.737994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.738303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.738332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.738673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.738703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.738929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.738958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.739300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.739329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.739657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.739686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.740005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.740034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.740396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.740427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.740777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.740806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 
00:40:07.993 [2024-11-20 18:07:07.741179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.741209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.741540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.741570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.741914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.741949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.742287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.742318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.742679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.742710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.743021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.743050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.743383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.743415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.743750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.743780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.744131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.744181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.744567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.744596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 
00:40:07.993 [2024-11-20 18:07:07.744928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.744958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.745314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.745345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.745712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.745741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.746065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.746095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.746433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.746463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.746703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.746735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.747056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.747087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.747426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.747456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.747812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.747842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.748197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.748228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 
00:40:07.993 [2024-11-20 18:07:07.748557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.748588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.748926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.748955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.749302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.749333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.749680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.749710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.750024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.750053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.750413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.750443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.750800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.750830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.751184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.751215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.751579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.751608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.751934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.751964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 
00:40:07.993 [2024-11-20 18:07:07.752286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.752317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.752632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.752661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.752996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.753025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.753372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.753403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.753741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.753771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.754142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.754181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.754554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.754584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.993 qpair failed and we were unable to recover it. 00:40:07.993 [2024-11-20 18:07:07.754909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.993 [2024-11-20 18:07:07.754939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.755301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.755331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.755709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.755738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 
00:40:07.994 [2024-11-20 18:07:07.756060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.756088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.756418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.756449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.756810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.756845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.757193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.757224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.757567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.757597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.757942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.757971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.758315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.758344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.758687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.758717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.759087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.759116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.759468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.759500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 
00:40:07.994 [2024-11-20 18:07:07.759834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.759864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.760197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.760229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.760576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.760606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.760939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.760968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.761317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.761349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.761710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.761739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.762057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.762087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.762432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.762462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.762802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.762831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.763191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.763222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 
00:40:07.994 [2024-11-20 18:07:07.763585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.763615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.763941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.763970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.764325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.764356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.764717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.764747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.765065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.765095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.765510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.765541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.765875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.765906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.766273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.766304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.766623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.766653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.766992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.767022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 
00:40:07.994 [2024-11-20 18:07:07.767381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.767412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.767766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.767795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.768174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.768206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.768552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.768581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.768898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.768928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.769270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.769301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.769648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.769678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.770017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.770048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.770429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.770459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 00:40:07.994 [2024-11-20 18:07:07.770773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.994 [2024-11-20 18:07:07.770804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.994 qpair failed and we were unable to recover it. 
00:40:07.994 [2024-11-20 18:07:07.771126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.994 [2024-11-20 18:07:07.771155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.994 qpair failed and we were unable to recover it.
00:40:07.99x [... the same three-message error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats, timestamps aside, for roughly 200 further connection attempts between 2024-11-20 18:07:07.771419 and 18:07:07.847604 ...]
00:40:07.997 [2024-11-20 18:07:07.847939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.997 [2024-11-20 18:07:07.847969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.997 qpair failed and we were unable to recover it.
00:40:07.997 [2024-11-20 18:07:07.848310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.848340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.848715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.848745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.849059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.849089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.849423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.849454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.849676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.849709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.850035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.850064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.850382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.850412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.850767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.850797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.851154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.851193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 00:40:07.997 [2024-11-20 18:07:07.851576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:07.997 [2024-11-20 18:07:07.851606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:07.997 qpair failed and we were unable to recover it. 
00:40:07.997 [2024-11-20 18:07:07.851911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.997 [2024-11-20 18:07:07.851941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.997 qpair failed and we were unable to recover it.
00:40:07.997 [2024-11-20 18:07:07.852247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.997 [2024-11-20 18:07:07.852277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.997 qpair failed and we were unable to recover it.
00:40:07.997 [2024-11-20 18:07:07.852625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.997 [2024-11-20 18:07:07.852655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.997 qpair failed and we were unable to recover it.
00:40:07.997 [2024-11-20 18:07:07.853003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.997 [2024-11-20 18:07:07.853032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.997 qpair failed and we were unable to recover it.
00:40:07.997 [2024-11-20 18:07:07.853356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.997 [2024-11-20 18:07:07.853388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.997 qpair failed and we were unable to recover it.
00:40:07.997 [2024-11-20 18:07:07.853739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.853769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.854108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.854138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.854490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.854522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.854835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.854864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.855208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.855240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.855594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.855625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.855944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.855972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.856314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.856344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.856583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.856615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.856937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.856967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.857224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.857256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.857591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.857622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.857946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.857977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.858308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.858339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.858689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.858720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.859033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.859062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.859404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.859435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.859754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.859790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.860003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.860032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.860338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.860372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.860490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.860521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.860891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.860921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.861150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.861204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.861556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.861586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.861933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.861963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.862216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.862247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.862572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.862602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.862914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.862943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.863177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.863207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.863579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.863608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.863941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.863971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.864340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.864371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.864709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.864739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.865071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.865101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.865456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.865489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.865803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.865833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.866177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.866208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.866540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.866570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.866814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.866844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.867152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.867192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.867533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.867564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.867918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.867947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.868305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.868335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.868654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.868685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.869030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.869064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.869410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.869441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.869807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.869838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.870155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.870204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.870508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.870538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.870873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.870903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.871209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.871240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.871588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.871617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.871957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.871988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.872327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.872357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.872701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.872730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.873046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.873077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.873420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.873452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.873791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.873821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.874177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.874209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.874453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.874483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.874795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.874826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.875173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.875204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.875558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.875588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.875953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.875983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.876182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.876211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.876566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.876596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.876917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.876948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.877191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.877221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.877472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.877502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.998 [2024-11-20 18:07:07.877866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.998 [2024-11-20 18:07:07.877896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.998 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.878191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.878222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.878576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.878607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.878953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.878984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.879248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.879278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.879620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.879649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.879968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.879997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.880325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.880356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.880702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.880732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.881059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.881090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.881464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.881495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.881795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.881825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.882174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.882205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.882465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.882493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.882811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.882841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.883191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.883228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.883449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.883478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.883855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.883886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.884207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.884238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.884581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.884612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.884953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.884983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.885325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.885356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.885690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.885719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.886065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.886094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.886449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.886480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.886846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.886877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.887190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.887220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.887554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.887585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.887932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.887961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.888282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.888312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.888658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.888688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.889025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.889055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.889272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.889303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.889667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.889698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.890015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.890045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.890345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.890376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.890702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.890733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.891090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.891119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:07.999 [2024-11-20 18:07:07.891486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:07.999 [2024-11-20 18:07:07.891519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:07.999 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.891857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.891889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.892219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.892250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.892574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.892603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.892926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.892955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.893280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.893311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.893644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.893674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.894030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.894060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.894386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.894418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.894753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.894784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.895136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.895177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.895565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.895594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.895912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.895942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.896289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.896322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.896647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.896678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.897008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.897038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.897381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.897411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.897734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.275 [2024-11-20 18:07:07.897772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.275 qpair failed and we were unable to recover it.
00:40:08.275 [2024-11-20 18:07:07.898080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.898111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.898437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.898468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.898792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.898820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.899148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.899187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.899526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.899556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.899910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.899939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.900255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.900286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.900623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.900652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.900965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.900995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.901333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.276 [2024-11-20 18:07:07.901364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.276 qpair failed and we were unable to recover it.
00:40:08.276 [2024-11-20 18:07:07.901687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.901717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.902039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.902069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.902413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.902445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.902762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.902792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.903118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.903148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.903492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.903522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.903844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.903872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.904211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.904242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.904577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.904607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.904939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.904970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 
00:40:08.276 [2024-11-20 18:07:07.905314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.905347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.905666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.905695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.906029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.906059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.906394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.906426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.906763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.906793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.907110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.907140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.907501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.907531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.907858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.907889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.908217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.908248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.908579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.908609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 
00:40:08.276 [2024-11-20 18:07:07.908931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.908962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.909308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.909338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.909669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.909699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.910029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.910059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.910388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.910421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.910747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.910776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.911104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.911134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.276 qpair failed and we were unable to recover it. 00:40:08.276 [2024-11-20 18:07:07.911519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.276 [2024-11-20 18:07:07.911550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.911878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.911909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.912223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.912259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 
00:40:08.277 [2024-11-20 18:07:07.912604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.912634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.912967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.912998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.913335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.913365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.913696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.913727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.914042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.914071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.914404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.914436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.914767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.914798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.915123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.915153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.915511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.915542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.915857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.915887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 
00:40:08.277 [2024-11-20 18:07:07.916200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.916230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.916562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.916590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.916924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.916957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.917292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.917323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.917633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.917662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.918001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.918031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.918258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.918289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.918617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.918646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.918980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.919010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.919348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.919381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 
00:40:08.277 [2024-11-20 18:07:07.919715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.919745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.920087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.920118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.920476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.920507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.920833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.920865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.921221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.921253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.921587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.921617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.921946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.921977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.922315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.922349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.922670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.922700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.923036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.923067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 
00:40:08.277 [2024-11-20 18:07:07.923389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.923419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.923756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.923787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.924170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.924202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.924553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.924583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.277 [2024-11-20 18:07:07.924919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.277 [2024-11-20 18:07:07.924948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.277 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.925282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.925313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.925642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.925672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.926094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.926124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.926440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.926471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.926804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.926844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 
00:40:08.278 [2024-11-20 18:07:07.927177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.927211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.927552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.927582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.927907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.927936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.928274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.928305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.928642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.928671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.929011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.929040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.929378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.929412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.929760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.929790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.930020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.930052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.930283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.930315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 
00:40:08.278 [2024-11-20 18:07:07.930645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.930674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.931044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.931075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.931403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.931435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.931683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.931716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.932040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.932072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.932413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.932444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.932791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.932822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.933225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.933255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.933586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.933617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.933977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.934006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 
00:40:08.278 [2024-11-20 18:07:07.934337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.934371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.934718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.934747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.935109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.935140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.935503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.935534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.935873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.935903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.936246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.936277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.936635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.936667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.278 [2024-11-20 18:07:07.936997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.278 [2024-11-20 18:07:07.937027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.278 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.937393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.937425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.937752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.937783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 
00:40:08.279 [2024-11-20 18:07:07.938093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.938123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.938469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.938502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.938829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.938858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.939212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.939244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.939601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.939631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.939980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.940011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.940359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.940389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.940732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.940761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.941084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.941114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.941480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.941516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 
00:40:08.279 [2024-11-20 18:07:07.941860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.941892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.942236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.942267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.942641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.942672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.942904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.942937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.943277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.943308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.943650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.943681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.944002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.944032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.944365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.944395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.944742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.944771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.945129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.945168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 
00:40:08.279 [2024-11-20 18:07:07.945513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.945544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.945901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.945931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.946270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.946302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.946652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.946682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.947050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.947082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.947423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.947454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.947801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.947832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.948262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.948293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.948603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.948635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.948968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.948997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 
00:40:08.279 [2024-11-20 18:07:07.949287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.949318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.949694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.949724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.950040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.950068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.950420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.950452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.950840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.279 [2024-11-20 18:07:07.950871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.279 qpair failed and we were unable to recover it. 00:40:08.279 [2024-11-20 18:07:07.951213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.951244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.951567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.951600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.951955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.951985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.952332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.952365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.952691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.952723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 
00:40:08.280 [2024-11-20 18:07:07.953043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.953074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.953427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.953459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.953803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.953835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.954187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.954219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.954554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.954584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.954949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.954981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.955322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.955354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.955698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.955729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.956067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.956099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 00:40:08.280 [2024-11-20 18:07:07.956429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.280 [2024-11-20 18:07:07.956466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.280 qpair failed and we were unable to recover it. 
00:40:08.280 [2024-11-20 18:07:07.956806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.280 [2024-11-20 18:07:07.956837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.280 qpair failed and we were unable to recover it.
00:40:08.286 [... the three messages above repeat verbatim, differing only in timestamp, for every subsequent reconnect attempt from 18:07:07.957178 through 18:07:08.034514: each connect() to 10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered; the duplicated block is elided here ...]
00:40:08.286 [2024-11-20 18:07:08.034846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.034878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.035244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.035277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.035602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.035634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.035995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.036027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.036431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.036463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.036780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.036810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.037169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.037202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.037560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.037591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.037911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.037944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.038258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.038290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 
00:40:08.286 [2024-11-20 18:07:08.038515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.038548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.038900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.038931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.039276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.039309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.039686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.039715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.040074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.040107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.040399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.040432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.040783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.286 [2024-11-20 18:07:08.040816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.286 qpair failed and we were unable to recover it. 00:40:08.286 [2024-11-20 18:07:08.041187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.041220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.041561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.041594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.041941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.041971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 
00:40:08.287 [2024-11-20 18:07:08.042303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.042334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.042717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.042750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.043119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.043152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.043487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.043518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.043866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.043899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.044235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.044267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.044612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.044645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.044992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.045022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.045340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.045371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.045704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.045741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 
00:40:08.287 [2024-11-20 18:07:08.046091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.046123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.046455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.046488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.046817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.046846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.047215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.047246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.047604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.047636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.047965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.047995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.048349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.048380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.048728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.048758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.049101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.049135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.049522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.049554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 
00:40:08.287 [2024-11-20 18:07:08.049876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.049907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.050237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.050268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.050629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.050660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.051011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.051041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.051389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.051423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.051747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.051778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.052127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.052166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.052503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.052533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.052779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.052809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.053177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.053209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 
00:40:08.287 [2024-11-20 18:07:08.053565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.053597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.053849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.053880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.054253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.054285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.054625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.054655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.287 [2024-11-20 18:07:08.055008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.287 [2024-11-20 18:07:08.055041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.287 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.055395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.055426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.055758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.055796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.056114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.056145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.056539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.056570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.056931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.056962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 
00:40:08.288 [2024-11-20 18:07:08.057289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.057320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.057657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.057687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.058054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.058084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.058435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.058467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.058791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.058822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.059146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.059186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.059546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.059576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.059925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.059956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.060304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.060336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.060647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.060677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 
00:40:08.288 [2024-11-20 18:07:08.061061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.061091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.061461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.061494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.061860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.061891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.062116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.062150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.062525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.062557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.062907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.062940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.063268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.063300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.063641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.063674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.064061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.064091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.064438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.064471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 
00:40:08.288 [2024-11-20 18:07:08.064869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.064901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.065265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.065296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.065636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.065668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.066021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.066052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.066393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.066426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.066660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.066694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.067054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.067085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.067448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.067481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.067816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.067847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.068087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.068120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 
00:40:08.288 [2024-11-20 18:07:08.068472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.068504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.288 qpair failed and we were unable to recover it. 00:40:08.288 [2024-11-20 18:07:08.068847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.288 [2024-11-20 18:07:08.068876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.069232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.069264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.069652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.069683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.070024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.070056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.070414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.070446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.070685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.070720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.071045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.071075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.071432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.071464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.071715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.071748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 
00:40:08.289 [2024-11-20 18:07:08.072075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.072108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.072474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.072507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.072831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.072861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.073218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.073250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.073574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.073603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.073961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.073992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.074358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.074390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.074746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.074777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.075141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.075182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.075573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.075603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 
00:40:08.289 [2024-11-20 18:07:08.075967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.075998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.076346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.076378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.076688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.076719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.077091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.077122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.077475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.077508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.077858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.077889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.078241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.078274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.078627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.078658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.079019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.079049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.079391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.079424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 
00:40:08.289 [2024-11-20 18:07:08.079748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.079778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.080104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.080136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.080581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.080613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.080961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.080992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.289 qpair failed and we were unable to recover it. 00:40:08.289 [2024-11-20 18:07:08.081337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.289 [2024-11-20 18:07:08.081368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.081715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.081747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.082100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.082130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.082486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.082520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.082752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.082786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.083108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.083141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 
00:40:08.290 [2024-11-20 18:07:08.083379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.083413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.083766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.083797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.084124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.084154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.084493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.084527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.084933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.084963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.085219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.085251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.085615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.085652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.086026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.086058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.086400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.086433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.086795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.086828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 
00:40:08.290 [2024-11-20 18:07:08.087154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.087195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.087522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.087554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.087828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.087859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.088207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.088240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.088603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.088634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.088953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.088982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.089348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.089379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.089719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.089749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.090089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.090120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.090538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.090571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 
00:40:08.290 [2024-11-20 18:07:08.090910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.090942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.091289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.091321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.091672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.091703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.091842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.091875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.092273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.092305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.092673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.092705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.093071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.093103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.093444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.093476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.093714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.093748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.094095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.094127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 
00:40:08.290 [2024-11-20 18:07:08.094471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.094503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.094841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.290 [2024-11-20 18:07:08.094872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.290 qpair failed and we were unable to recover it. 00:40:08.290 [2024-11-20 18:07:08.095205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.095237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.095592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.095626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.095960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.095991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.096327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.096358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.096732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.096763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.097134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.097178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.097508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.097538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.097880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.097909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 
00:40:08.291 [2024-11-20 18:07:08.098187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.098219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.098588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.098619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.098949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.098981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.099443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.099475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.099852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.099884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.100260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.100292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.100634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.100671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.101002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.101033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.101264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.101298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.101627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.101659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 
00:40:08.291 [2024-11-20 18:07:08.102039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.102070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.102426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.102458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.102807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.102838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.103207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.103239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.103597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.103629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.103944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.103976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.104209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.104240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.104578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.104607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.104966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.104997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.105329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.105360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 
00:40:08.291 [2024-11-20 18:07:08.105643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.105673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.105932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.105965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.106286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.106318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.106666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.106699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.107058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.107090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.107456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.107487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.107812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.107842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.108176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.108208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.108554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.291 [2024-11-20 18:07:08.108584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.291 qpair failed and we were unable to recover it. 00:40:08.291 [2024-11-20 18:07:08.108947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.108977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 
00:40:08.292 [2024-11-20 18:07:08.109315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.109348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.109709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.109739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.110086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.110118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.110492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.110524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.110836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.110866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.111213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.111245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.111635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.111666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.112008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.112038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.112259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.112291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.112655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.112687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 
00:40:08.292 [2024-11-20 18:07:08.113031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.113062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.113435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.113465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.113796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.113828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.114169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.114201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.114545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.114575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.114926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.114957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.115181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.115218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.115646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.115678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.116012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.116044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.116408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.116439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 
00:40:08.292 [2024-11-20 18:07:08.116764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.116794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.117045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.117075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.117414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.117448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.117804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.117835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.118198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.118231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.118587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.118621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.118962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.118994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.119337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.119368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.119713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.119745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.120083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.120115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 
00:40:08.292 [2024-11-20 18:07:08.120502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.120534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.120895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.120926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.121309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.121341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.121669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.121700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.122068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.122099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.122476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.122509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.292 [2024-11-20 18:07:08.122833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.292 [2024-11-20 18:07:08.122864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.292 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.123202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.123233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.123591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.123620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.123982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.124014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 
00:40:08.293 [2024-11-20 18:07:08.124262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.124294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.124668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.124699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.125047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.125078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.125444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.125476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.125840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.125872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.126186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.126219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.126570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.126602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.126863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.126895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.127228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.127261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.127595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.127626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 
00:40:08.293 [2024-11-20 18:07:08.127996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.128026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.128394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.128428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.128759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.128791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.129130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.129169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.129408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.129439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.129770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.129801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.130128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.130185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.130504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.130536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.130881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.130913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.131274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.131307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 
00:40:08.293 [2024-11-20 18:07:08.131642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.131672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.132102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.132134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.132415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.132446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.132802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.132832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.133120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.133153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.133490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.133521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.133903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.133935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.134296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.134330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.134655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.134686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.135067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.135099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 
00:40:08.293 [2024-11-20 18:07:08.135494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.135528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.135913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.135945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.136267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.136299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.136652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.136684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.293 qpair failed and we were unable to recover it. 00:40:08.293 [2024-11-20 18:07:08.137037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.293 [2024-11-20 18:07:08.137068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.137447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.137480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.137803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.137833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.138197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.138230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.138613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.138643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.139002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.139034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 
00:40:08.294 [2024-11-20 18:07:08.139393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.139425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.139662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.139691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.140033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.140064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.140444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.140478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.140806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.140837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.141172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.141206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.141593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.141623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.141990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.142023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.142396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.142427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.142835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.142866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 
00:40:08.294 [2024-11-20 18:07:08.143217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.143249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.143618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.143649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.143978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.144008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.144357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.144388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.144745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.144777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.145130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.145169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.145409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.145445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.145772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.145802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.146138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.146186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 00:40:08.294 [2024-11-20 18:07:08.146530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.294 [2024-11-20 18:07:08.146562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.294 qpair failed and we were unable to recover it. 
00:40:08.294 [2024-11-20 18:07:08.146898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.294 [2024-11-20 18:07:08.146928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.294 qpair failed and we were unable to recover it.
[... output condensed: the same three-line failure (posix_sock_create connect() errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, roughly 210 occurrences in total, from 18:07:08.146898 through 18:07:08.225076, elapsed log time 00:40:08.294 through 00:40:08.572 ...]
00:40:08.572 [2024-11-20 18:07:08.225424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.225458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.225806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.225836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.226207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.226240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.226639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.226670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.227025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.227057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.227414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.227446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.227771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.227801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.228129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.228168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.228546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.228577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.228930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.228964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 
00:40:08.572 [2024-11-20 18:07:08.229322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.229353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.229682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.229712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.230060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.230092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.230427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.230460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.230836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.230868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.231236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.231268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.231647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.231678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.232037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.232069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.232418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.232450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.232778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.232810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 
00:40:08.572 [2024-11-20 18:07:08.233176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.233209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.233546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.233577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.233908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.233938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.234278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.234311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.234669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.572 [2024-11-20 18:07:08.234701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.572 qpair failed and we were unable to recover it. 00:40:08.572 [2024-11-20 18:07:08.234947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.234976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.235367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.235407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.235732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.235769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.236124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.236155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.236552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.236583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 
00:40:08.573 [2024-11-20 18:07:08.236837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.236868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.237236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.237267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.237667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.237698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.237967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.237998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.238328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.238359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.238696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.238728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.239066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.239097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.239459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.239493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.239832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.239863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.240207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.240239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 
00:40:08.573 [2024-11-20 18:07:08.240617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.240648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.241010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.241042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.241426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.241458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.241830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.241862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.242221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.242252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.242638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.242669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.243005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.243036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.243270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.243302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.243659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.243691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.244047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.244080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 
00:40:08.573 [2024-11-20 18:07:08.244426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.244458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.244791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.244824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.245187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.245219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.245584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.245617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.245948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.245978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.573 [2024-11-20 18:07:08.246304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.573 [2024-11-20 18:07:08.246336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.573 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.246569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.246600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.246967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.246998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.247359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.247392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.247721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.247754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 
00:40:08.574 [2024-11-20 18:07:08.248110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.248141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.248499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.248530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.248948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.248980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.249331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.249362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.249708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.249740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.250151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.250196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.250598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.250630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.250964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.250995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.251236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.251269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.251652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.251684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 
00:40:08.574 [2024-11-20 18:07:08.252037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.252069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.252440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.252473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.252712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.252743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.253092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.253123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.253492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.253524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.253883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.253915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.254276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.254308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.254656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.254688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.255053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.255084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.255421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.255455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 
00:40:08.574 [2024-11-20 18:07:08.255844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.255875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.256242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.256275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.256651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.256681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.257062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.257095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.257448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.257478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.257732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.257764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.258099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.258131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.258473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.258504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.258839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.258871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.259226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.259258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 
00:40:08.574 [2024-11-20 18:07:08.259598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.259631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.259960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.259990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.574 qpair failed and we were unable to recover it. 00:40:08.574 [2024-11-20 18:07:08.260342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.574 [2024-11-20 18:07:08.260376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.260737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.260776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.261105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.261137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.261510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.261542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.261777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.261812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.262176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.262209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.262577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.262611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.262942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.262973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 
00:40:08.575 [2024-11-20 18:07:08.263320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.263351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.263721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.263751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.264116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.264149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.264558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.264590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.264947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.264979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.265327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.265359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.265710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.265742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.266089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.266121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.266483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.266516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.266866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.266898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 
00:40:08.575 [2024-11-20 18:07:08.267242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.267275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.267608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.267640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.267998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.268029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.268409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.268443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.268794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.268829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.269156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.269195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.269553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.269583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.269962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.269993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.270313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.270345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.270685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.270715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 
00:40:08.575 [2024-11-20 18:07:08.271076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.271107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.271469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.271505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.271837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.271869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.272239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.272272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.272624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.272655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.273014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.273046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.273282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.273314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.273674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.273706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.274050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.575 [2024-11-20 18:07:08.274081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.575 qpair failed and we were unable to recover it. 00:40:08.575 [2024-11-20 18:07:08.274435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.576 [2024-11-20 18:07:08.274467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.576 qpair failed and we were unable to recover it. 
00:40:08.576 [2024-11-20 18:07:08.274804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.576 [2024-11-20 18:07:08.274834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.576 qpair failed and we were unable to recover it.
00:40:08.581 [... the same connect()/qpair-failure triplet repeats verbatim for every reconnect attempt from 18:07:08.274804 through 18:07:08.353272: each connect() to 10.0.0.2 port 4420 failed with errno = 111 (ECONNREFUSED) and tqpair=0x7f0b0c000b90 was never recovered; the duplicated log lines are elided here ...]
00:40:08.581 [2024-11-20 18:07:08.353632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.581 [2024-11-20 18:07:08.353662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.581 qpair failed and we were unable to recover it. 00:40:08.581 [2024-11-20 18:07:08.354035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.581 [2024-11-20 18:07:08.354067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.581 qpair failed and we were unable to recover it. 00:40:08.581 [2024-11-20 18:07:08.354400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.581 [2024-11-20 18:07:08.354432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.581 qpair failed and we were unable to recover it. 00:40:08.581 [2024-11-20 18:07:08.354666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.581 [2024-11-20 18:07:08.354700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.581 qpair failed and we were unable to recover it. 00:40:08.581 [2024-11-20 18:07:08.354931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.581 [2024-11-20 18:07:08.354964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.581 qpair failed and we were unable to recover it. 00:40:08.581 [2024-11-20 18:07:08.355315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.581 [2024-11-20 18:07:08.355348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.581 qpair failed and we were unable to recover it. 00:40:08.581 [2024-11-20 18:07:08.355698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.581 [2024-11-20 18:07:08.355728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.581 qpair failed and we were unable to recover it. 00:40:08.581 [2024-11-20 18:07:08.355959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.581 [2024-11-20 18:07:08.355989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.356383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.356415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.356765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.356795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 
00:40:08.582 [2024-11-20 18:07:08.357134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.357175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.357540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.357572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.357922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.357952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.358293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.358325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.358676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.358706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.359036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.359068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.359416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.359448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.359802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.359834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.360169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.360203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.360561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.360592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 
00:40:08.582 [2024-11-20 18:07:08.360961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.360992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.361344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.361378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.361714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.361744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.362094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.362126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.362300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.362332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.362703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.362735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.363077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.363109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.363346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.363378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.363733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.363765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.364118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.364149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 
00:40:08.582 [2024-11-20 18:07:08.364499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.364531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.364869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.364901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.365235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.365268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.365628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.365666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.366024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.366055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.366394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.366425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.366782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.366813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.367174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.367206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.367497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.367528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 00:40:08.582 [2024-11-20 18:07:08.367875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.367907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.582 qpair failed and we were unable to recover it. 
00:40:08.582 [2024-11-20 18:07:08.368269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.582 [2024-11-20 18:07:08.368300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.368672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.368702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.369076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.369107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.369512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.369545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.369838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.369867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.370230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.370262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.370603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.370636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.370973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.371004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.371240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.371272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.371637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.371667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 
00:40:08.583 [2024-11-20 18:07:08.372010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.372040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.372390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.372421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.372662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.372693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.372956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.372986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.373360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.373391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.373832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.373864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.374218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.374251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.374615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.374646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.375003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.375033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.375381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.375414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 
00:40:08.583 [2024-11-20 18:07:08.375771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.375806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.376174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.376207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.376543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.376574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.376986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.377017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.377367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.377398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.377768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.377798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.378176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.378208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.378445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.378475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.378845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.378877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.379247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.379279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 
00:40:08.583 [2024-11-20 18:07:08.379650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.379682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.380031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.380063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.380448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.380479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.380826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.380861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.381197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.381231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.381480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.381512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.381776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.381809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.382182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.583 [2024-11-20 18:07:08.382214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.583 qpair failed and we were unable to recover it. 00:40:08.583 [2024-11-20 18:07:08.382457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.382488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.382854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.382885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 
00:40:08.584 [2024-11-20 18:07:08.383260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.383292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.383660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.383691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.384019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.384051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.384317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.384350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.384582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.384616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.384977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.385008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.385351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.385383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.385731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.385764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.385989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.386025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.386273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.386306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 
00:40:08.584 [2024-11-20 18:07:08.386533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.386566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.386912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.386943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.387178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.387211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.387598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.387630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.387996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.388028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.388242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.388276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.388424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.388458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.388831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.388863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.389222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.389255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.389626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.389657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 
00:40:08.584 [2024-11-20 18:07:08.390018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.390052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.390418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.390450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.390708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.390740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.391087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.391124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.391500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.391535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.391756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.391787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.392195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.392228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.392542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.392574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.392937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.392968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.393308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.393340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 
00:40:08.584 [2024-11-20 18:07:08.393683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.393715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.393959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.393989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.394337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.394370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.394733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.394771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.395042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.395075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.584 [2024-11-20 18:07:08.395415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.584 [2024-11-20 18:07:08.395448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.584 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.395823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.395855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.396267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.396298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.396657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.396690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.397015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.397045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 
00:40:08.585 [2024-11-20 18:07:08.397412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.397446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.397781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.397811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.398180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.398214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.398469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.398505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.398852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.398883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.399236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.399267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.399510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.399545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.399814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.399845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.399994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.400024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 00:40:08.585 [2024-11-20 18:07:08.400195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.585 [2024-11-20 18:07:08.400226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.585 qpair failed and we were unable to recover it. 
00:40:08.585 [2024-11-20 18:07:08.400628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.585 [2024-11-20 18:07:08.400660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.585 qpair failed and we were unable to recover it.
[... the same three-record failure repeats back-to-back for roughly 210 reconnect attempts within about 80 ms (18:07:08.400983 through 18:07:08.479608, wall clock 00:40:08.585 to 00:40:08.866), always against tqpair=0x7f0b0c000b90 at 10.0.0.2:4420 and always with errno = 111; duplicate records elided ...]
00:40:08.866 [2024-11-20 18:07:08.479944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.866 [2024-11-20 18:07:08.479974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.866 qpair failed and we were unable to recover it.
00:40:08.866 [2024-11-20 18:07:08.480329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.480369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.480758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.480789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.481123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.481156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.481541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.481571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.481927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.481959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.482318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.482351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.482754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.482785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.483153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.483196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.483542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.483574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.483929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.483961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 
00:40:08.866 [2024-11-20 18:07:08.484297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.484328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.866 qpair failed and we were unable to recover it. 00:40:08.866 [2024-11-20 18:07:08.484672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.866 [2024-11-20 18:07:08.484703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.485077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.485108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.485480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.485513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.485895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.485925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.486177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.486211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.486567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.486598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.486953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.486985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.487323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.487355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.487699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.487732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 
00:40:08.867 [2024-11-20 18:07:08.488098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.488129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.488503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.488537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.488867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.488897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.489267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.489300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.489647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.489677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.490031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.490063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.490394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.490426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.490801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.490833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.491184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.491216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.491572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.491603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 
00:40:08.867 [2024-11-20 18:07:08.491935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.491968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.492305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.492337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.492699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.492731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.493154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.493198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.493599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.493631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.493873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.493905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.494271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.494304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.494661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.494694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.495031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.495062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.867 qpair failed and we were unable to recover it. 00:40:08.867 [2024-11-20 18:07:08.495444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.867 [2024-11-20 18:07:08.495477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 
00:40:08.868 [2024-11-20 18:07:08.495823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.495863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.496213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.496245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.496588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.496620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.496960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.496991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.497316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.497347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.497718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.497749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.498089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.498122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.498492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.498524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.498838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.498869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.499235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.499266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 
00:40:08.868 [2024-11-20 18:07:08.499604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.499636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.500004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.500035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.500402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.500435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.500793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.500825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.501190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.501224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.501592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.501622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.502041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.502072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.502394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.502428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.502803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.502833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.503228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.503260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 
00:40:08.868 [2024-11-20 18:07:08.503624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.503655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.504014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.504045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.504413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.504445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.504793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.504824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.505179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.505211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.505563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.505593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.505983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.506014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.506413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.506447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.506799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.868 [2024-11-20 18:07:08.506829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.868 qpair failed and we were unable to recover it. 00:40:08.868 [2024-11-20 18:07:08.507175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.507207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 
00:40:08.869 [2024-11-20 18:07:08.507604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.507635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.507966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.507998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.508366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.508398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.508744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.508776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.509112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.509142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.509484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.509516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.509872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.509903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.510266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.510299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.510636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.510666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.511046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.511078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 
00:40:08.869 [2024-11-20 18:07:08.511465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.511505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.511857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.511888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.512226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.512258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.512624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.512654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.512987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.513020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.513377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.513409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.513738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.513768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.514108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.514139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.514525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.514557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.514921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.514952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 
00:40:08.869 [2024-11-20 18:07:08.515206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.515237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.515593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.515625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.515975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.516006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.516388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.516428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.516795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.516825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.517157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.517199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.517443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.517476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.517854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.517885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.518219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.518250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.518623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.518655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 
00:40:08.869 [2024-11-20 18:07:08.519006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.519037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.519395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.869 [2024-11-20 18:07:08.519427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.869 qpair failed and we were unable to recover it. 00:40:08.869 [2024-11-20 18:07:08.519774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.519807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.520138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.520178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.520504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.520535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.520928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.520959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.521302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.521335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.521682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.521712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.522074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.522107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.522449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.522482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 
00:40:08.870 [2024-11-20 18:07:08.522855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.522888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.523240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.523273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.523645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.523676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.524030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.524061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.524402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.524434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.524759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.524790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.525146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.525191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.525611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.525642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.525973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.526005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.526358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.526390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 
00:40:08.870 [2024-11-20 18:07:08.526746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.526790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.527141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.527184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.527583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.527613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.527936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.527969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.528210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.528246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.528595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.528626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.528965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.528997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.529234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.529266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.529616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.529648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 00:40:08.870 [2024-11-20 18:07:08.530008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.530040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it. 
00:40:08.870 [2024-11-20 18:07:08.530395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.870 [2024-11-20 18:07:08.530428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.870 qpair failed and we were unable to recover it.
[the same three-line failure sequence (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f0b0c000b90 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats for every connection attempt between 18:07:08.530 and 18:07:08.609, identical except for timestamps; roughly two hundred occurrences]
00:40:08.877 [2024-11-20 18:07:08.609056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.877 [2024-11-20 18:07:08.609086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.877 qpair failed and we were unable to recover it.
00:40:08.877 [2024-11-20 18:07:08.609437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.877 [2024-11-20 18:07:08.609471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.877 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.609815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.609848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.610189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.610221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.610552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.610582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.610926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.610956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.611322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.611354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.611700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.611731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.612103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.612136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.612518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.612551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.612922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.612953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 
00:40:08.878 [2024-11-20 18:07:08.613282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.613315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.613545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.613578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.613922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.613955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.614275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.614307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.614664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.614696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.615017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.615047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.615313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.615345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.615686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.615717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.616051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.616082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.616410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.616442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 
00:40:08.878 [2024-11-20 18:07:08.616788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.616824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.617192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.617225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.617588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.617618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.617936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.617967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.618357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.618388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.618774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.618807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.619155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.619198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.619528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.619561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.619929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.619962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.620326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.620360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 
00:40:08.878 [2024-11-20 18:07:08.620739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.620771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.621124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.621155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.621520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.621552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.621911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.878 [2024-11-20 18:07:08.621943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.878 qpair failed and we were unable to recover it. 00:40:08.878 [2024-11-20 18:07:08.622278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.622310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.622556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.622587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.622941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.622972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.623324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.623358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.623681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.623713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.624048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.624080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 
00:40:08.879 [2024-11-20 18:07:08.624413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.624444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.624810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.624841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.625180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.625213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.625615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.625645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.625989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.626022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.626255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.626291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.626561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.626591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.626995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.627026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.627415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.627449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.627797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.627829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 
00:40:08.879 [2024-11-20 18:07:08.628065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.628096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.628466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.628499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.628853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.628885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.629120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.629152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.629482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.629514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.629852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.629884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.630015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.630044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.630291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.630327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.630668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.630700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.631096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.631127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 
00:40:08.879 [2024-11-20 18:07:08.631364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.631398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.631771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.631805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.632049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.879 [2024-11-20 18:07:08.632079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.879 qpair failed and we were unable to recover it. 00:40:08.879 [2024-11-20 18:07:08.632391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.632423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.632798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.632829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.633196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.633227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.633556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.633587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.633915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.633946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.634311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.634343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.634684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.634715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 
00:40:08.880 [2024-11-20 18:07:08.635041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.635074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.635412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.635445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.635794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.635826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.636170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.636203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.636569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.636600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.636951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.636982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.637336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.637368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.637586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.637617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.637974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.638005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.638329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.638361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 
00:40:08.880 [2024-11-20 18:07:08.638676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.638709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.639071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.639102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.639479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.639513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.639779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.639814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.640178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.640211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.640566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.640596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.640923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.640955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.641196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.641234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.641594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.641626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.641875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.641906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 
00:40:08.880 [2024-11-20 18:07:08.642272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.642306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.642661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.642691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.643056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.643087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.643419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.643451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.643779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.643809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.644153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.644196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.644428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.880 [2024-11-20 18:07:08.644463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.880 qpair failed and we were unable to recover it. 00:40:08.880 [2024-11-20 18:07:08.644722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.644756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.645101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.645132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.645502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.645534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 
00:40:08.881 [2024-11-20 18:07:08.645888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.645919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.646280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.646312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.646672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.646706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.647031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.647062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.647449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.647484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.647842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.647873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.648239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.648273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.648517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.648552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.648783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.648814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.649180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.649212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 
00:40:08.881 [2024-11-20 18:07:08.649442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.649475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.649847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.649879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.650220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.650253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.650496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.650527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.650935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.650967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.651303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.651337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.651683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.651715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.652063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.652094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.652433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.652464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.652794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.652824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 
00:40:08.881 [2024-11-20 18:07:08.653181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.653214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.653572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.653604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.653925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.653958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.654312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.654343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.654717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.654750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.881 [2024-11-20 18:07:08.657135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.881 [2024-11-20 18:07:08.657221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.881 qpair failed and we were unable to recover it. 00:40:08.882 [2024-11-20 18:07:08.657634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.882 [2024-11-20 18:07:08.657673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.882 qpair failed and we were unable to recover it. 00:40:08.882 [2024-11-20 18:07:08.658026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.882 [2024-11-20 18:07:08.658067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.882 qpair failed and we were unable to recover it. 00:40:08.882 [2024-11-20 18:07:08.658409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.882 [2024-11-20 18:07:08.658443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.882 qpair failed and we were unable to recover it. 00:40:08.882 [2024-11-20 18:07:08.658784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.882 [2024-11-20 18:07:08.658815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.882 qpair failed and we were unable to recover it. 
00:40:08.882 [2024-11-20 18:07:08.659100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.882 [2024-11-20 18:07:08.659134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.882 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats continuously with advancing timestamps (18:07:08.659 through 18:07:08.745): every connect() attempt to 10.0.0.2, port=4420 on tqpair=0x7f0b0c000b90 returns errno = 111, and each qpair fails without recovery ...]
00:40:08.889 [2024-11-20 18:07:08.745284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:08.889 [2024-11-20 18:07:08.745316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:08.889 qpair failed and we were unable to recover it.
00:40:08.889 [2024-11-20 18:07:08.745668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.745696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.745992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.746020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.746386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.746417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.746735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.746765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.747093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.747121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.747481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.747513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.747766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.747799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.748131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.748170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.748522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.748551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.748919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.748948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 
00:40:08.889 [2024-11-20 18:07:08.749354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.749385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.749746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.749774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.750143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.750182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.750564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.750593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.750939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.750969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.751330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.751362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.751702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.751730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.752111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.752141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.889 [2024-11-20 18:07:08.752500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.889 [2024-11-20 18:07:08.752530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.889 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.752882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.752911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 
00:40:08.890 [2024-11-20 18:07:08.753252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.753282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.753647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.753675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.754049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.754077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.754412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.754443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.754772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.754800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.755233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.755265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.755615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.755643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.756015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.756052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.756392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.756423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.756775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.756803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 
00:40:08.890 [2024-11-20 18:07:08.757180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.757211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.757600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.757629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.757977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.758007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.758385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.758415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.758775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.758806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.759140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.759180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.759585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.759615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.759961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.759992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.760351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.760381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.760756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.760786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 
00:40:08.890 [2024-11-20 18:07:08.761126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.761156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.761537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.761569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.761917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.761947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.762280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.762310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.762671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.762701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.762960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.762989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.763227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.763256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.763658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.763688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.764017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.764045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:08.890 qpair failed and we were unable to recover it. 00:40:08.890 [2024-11-20 18:07:08.764393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:08.890 [2024-11-20 18:07:08.764423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 
00:40:09.167 [2024-11-20 18:07:08.764798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.764829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.765177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.765208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.765593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.765623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.765982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.766012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.766250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.766281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.766565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.766595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.766966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.766998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.767356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.767387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.767755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.767784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.768038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.768070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 
00:40:09.167 [2024-11-20 18:07:08.768463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.768493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.768849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.167 [2024-11-20 18:07:08.768878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.167 qpair failed and we were unable to recover it. 00:40:09.167 [2024-11-20 18:07:08.769193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.769225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.769605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.769636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.769971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.770001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.770297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.770329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.770676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.770705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.771060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.771095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.771442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.771472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.771746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.771775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 
00:40:09.168 [2024-11-20 18:07:08.772119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.772149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.772500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.772529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.772892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.772921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.773212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.773244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.773494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.773523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.773897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.773927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.774317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.774348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.774726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.774755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.775109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.775138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.775388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.775421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 
00:40:09.168 [2024-11-20 18:07:08.775768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.775800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.776180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.776210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.776664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.776693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.777030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.777059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.777404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.777435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.777800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.777831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.778179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.778210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.778605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.778633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.779008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.779038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.779400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.779431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 
00:40:09.168 [2024-11-20 18:07:08.779787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.779816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.780198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.780229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.780613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.780642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.781006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.781036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.781397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.781428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.781771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.781800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.782172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.782204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.782621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.782650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.783022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.168 [2024-11-20 18:07:08.783050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.168 qpair failed and we were unable to recover it. 00:40:09.168 [2024-11-20 18:07:08.783385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.783414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 
00:40:09.169 [2024-11-20 18:07:08.783647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.783676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.784029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.784056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.784317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.784350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.784725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.784754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.785106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.785134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.785548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.785579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.785930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.785959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.786320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.786358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.786712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.786741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.786996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.787025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 
00:40:09.169 [2024-11-20 18:07:08.787395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.787427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.787788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.787817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.788170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.788200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.788549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.788588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.788973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.789002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.789370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.789400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.789780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.789809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.790194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.790223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.790601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.790629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.790792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.790826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 
00:40:09.169 [2024-11-20 18:07:08.791212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.791243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.791602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.791632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.792007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.792036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.792370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.792401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.792764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.792793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.793273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.793304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.793667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.793695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.794037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.794065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.794444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.794475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 00:40:09.169 [2024-11-20 18:07:08.794839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.794866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it. 
00:40:09.169 [2024-11-20 18:07:08.795219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.169 [2024-11-20 18:07:08.795249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.169 qpair failed and we were unable to recover it.
[... the three-line triplet above repeats with only the timestamps advancing: roughly 200 further connect() attempts against tqpair=0x7f0b0c000b90 (10.0.0.2, port 4420) between 18:07:08.795 and 18:07:08.871, every one failing with errno = 111, and every qpair reported as failed and unrecoverable ...]
00:40:09.175 [2024-11-20 18:07:08.871366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.871396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it.
00:40:09.175 [2024-11-20 18:07:08.871649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.871678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.871928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.871960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.872280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.872310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.872675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.872704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.873029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.873058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.873397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.873427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.873759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.873789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.874122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.874151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.874489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.874521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.874906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.874935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 
00:40:09.175 [2024-11-20 18:07:08.875319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.875349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.875676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.875706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.875917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.875950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.876288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.175 [2024-11-20 18:07:08.876318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.175 qpair failed and we were unable to recover it. 00:40:09.175 [2024-11-20 18:07:08.876662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.876691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.877024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.877053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.877397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.877427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.877757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.877787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.878194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.878224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.878561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.878590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 
00:40:09.176 [2024-11-20 18:07:08.878893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.878921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.879297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.879328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.879655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.879685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.880019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.880049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.880381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.880411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.880779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.880809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.881144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.881196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.881543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.881572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.881905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.881934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.882305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.882335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 
00:40:09.176 [2024-11-20 18:07:08.882675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.882706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.883016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.883046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.883400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.883430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.883768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.883797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.884138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.884186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.884506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.884536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.884867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.884897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.885242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.885273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.885631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.885660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.885972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.886001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 
00:40:09.176 [2024-11-20 18:07:08.886364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.886393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.886720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.886749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.887086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.887114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.887334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.887364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.887687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.887715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.888072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.888100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.888356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.888385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.176 qpair failed and we were unable to recover it. 00:40:09.176 [2024-11-20 18:07:08.888695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.176 [2024-11-20 18:07:08.888723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.889046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.889074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.889415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.889447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 
00:40:09.177 [2024-11-20 18:07:08.889773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.889803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.890130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.890168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.890499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.890527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.890911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.890939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.891269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.891297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.891541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.891573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.891903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.891932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.892272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.892301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.892649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.892677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.892909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.892937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 
00:40:09.177 [2024-11-20 18:07:08.893182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.893212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.893580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.893609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.893950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.893979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.894326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.894356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.894671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.894699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.895045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.895074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.895414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.895444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.895788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.895817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.896139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.896175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.896511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.896540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 
00:40:09.177 [2024-11-20 18:07:08.896786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.896813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.897199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.897231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.897566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.897594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.897930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.897958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.898387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.898423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.898774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.898802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.899104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.899132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.899450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.899479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.899842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.899870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.900202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.900231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 
00:40:09.177 [2024-11-20 18:07:08.900462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.900491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.900834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.900862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.901189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.901220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.901541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.901570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.177 [2024-11-20 18:07:08.901941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.177 [2024-11-20 18:07:08.901969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.177 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.902304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.902334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.902674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.902703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.902913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.902941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.903186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.903215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.903576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.903605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 
00:40:09.178 [2024-11-20 18:07:08.903953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.903981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.904321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.904350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.904501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.904530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.904880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.904909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.905274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.905304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.905628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.905656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.905986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.906014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.906244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.906274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.906618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.906647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.906997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.907025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 
00:40:09.178 [2024-11-20 18:07:08.907415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.907445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.907827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.907857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.908178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.908206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.908533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.908562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.908902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.908930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.909277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.909306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.909644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.909673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.910035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.910064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.910442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.910472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.910821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.910849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 
00:40:09.178 [2024-11-20 18:07:08.911209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.911238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.911558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.911586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.911903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.911932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.912273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.912302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.912636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.912669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.912991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.913019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.913363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.913393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.913737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.913765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.914134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.914171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.914495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.914524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 
00:40:09.178 [2024-11-20 18:07:08.914864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.914893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.915235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.178 [2024-11-20 18:07:08.915266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.178 qpair failed and we were unable to recover it. 00:40:09.178 [2024-11-20 18:07:08.915618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.179 [2024-11-20 18:07:08.915646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.179 qpair failed and we were unable to recover it. 00:40:09.179 [2024-11-20 18:07:08.915992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.179 [2024-11-20 18:07:08.916022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.179 qpair failed and we were unable to recover it. 00:40:09.179 [2024-11-20 18:07:08.916364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.179 [2024-11-20 18:07:08.916394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.179 qpair failed and we were unable to recover it. 00:40:09.179 [2024-11-20 18:07:08.916705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.179 [2024-11-20 18:07:08.916733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.179 qpair failed and we were unable to recover it. 00:40:09.179 [2024-11-20 18:07:08.917082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.179 [2024-11-20 18:07:08.917111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.179 qpair failed and we were unable to recover it. 00:40:09.179 [2024-11-20 18:07:08.917434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.179 [2024-11-20 18:07:08.917464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.179 qpair failed and we were unable to recover it. 00:40:09.179 [2024-11-20 18:07:08.917783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.179 [2024-11-20 18:07:08.917812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.179 qpair failed and we were unable to recover it. 00:40:09.179 [2024-11-20 18:07:08.918168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.179 [2024-11-20 18:07:08.918199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.179 qpair failed and we were unable to recover it. 
00:40:09.179 [2024-11-20 18:07:08.918539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.179 [2024-11-20 18:07:08.918569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.179 qpair failed and we were unable to recover it.
[... the same three-record error cycle (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 18:07:08.918930 through 18:07:08.995493 ...]
00:40:09.184 [2024-11-20 18:07:08.995836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.184 [2024-11-20 18:07:08.995863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.184 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.996212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.996243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.996574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.996603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.996913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.996942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.997313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.997343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.997678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.997706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.998052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.998081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.998425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.998455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.998785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.998814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.999157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.999198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 
00:40:09.185 [2024-11-20 18:07:08.999532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.999561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:08.999903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:08.999931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.000291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.000322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.000651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.000680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.001035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.001070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.001447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.001479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.001836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.001864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.002211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.002241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.002599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.002629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.002946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.002976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 
00:40:09.185 [2024-11-20 18:07:09.003343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.003373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.003658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.003687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.004043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.004070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.004442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.004472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.004821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.004850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.005085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.005118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.005488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.005518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.005833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.005862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.006190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.006221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.006546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.006574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 
00:40:09.185 [2024-11-20 18:07:09.006928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.006957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.007296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.007326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.007668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.007696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.008047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.008076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.008410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.008439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.008828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.008857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.009277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.009306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.009655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.009683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.185 [2024-11-20 18:07:09.010037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.185 [2024-11-20 18:07:09.010066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.185 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.010437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.010466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 
00:40:09.186 [2024-11-20 18:07:09.010801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.010829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.011186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.011217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.011633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.011662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.011983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.012012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.012340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.012370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.012789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.012818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.013144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.013198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.013525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.013554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.013893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.013921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.014220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.014251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 
00:40:09.186 [2024-11-20 18:07:09.014609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.014638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.014975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.015003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.015348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.015378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.015615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.015645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.015966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.015994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.016330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.016361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.016785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.016814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.017133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.017169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.017517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.017545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.017866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.017896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 
00:40:09.186 [2024-11-20 18:07:09.018114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.018148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.018585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.018615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.018933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.018962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.019213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.019247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.019521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.019550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.019894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.019922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.020256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.020286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.020627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.020655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.021002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.021032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.021376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.021407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 
00:40:09.186 [2024-11-20 18:07:09.021746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.021775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.186 qpair failed and we were unable to recover it. 00:40:09.186 [2024-11-20 18:07:09.022115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.186 [2024-11-20 18:07:09.022144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.022525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.022555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.022886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.022915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.023328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.023358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.023670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.023699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.024045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.024073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.024424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.024456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.024676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.024708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.025056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.025086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 
00:40:09.187 [2024-11-20 18:07:09.025434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.025465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.025805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.025845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.026172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.026202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.026547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.026576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.026931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.026960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.027352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.027382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.027697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.027726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.028119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.028148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.028502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.028532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.028861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.028889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 
00:40:09.187 [2024-11-20 18:07:09.029231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.029262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.029606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.029635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.029975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.030005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.030243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.030276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.030626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.030655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.030990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.031019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.031258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.031289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.031574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.031604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.031947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.031975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.032398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.032429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 
00:40:09.187 [2024-11-20 18:07:09.032739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.032768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.033063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.033091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.033445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.033476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.033833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.033862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.034247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.034276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.034624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.034653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.035006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.035035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.035370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.035399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.035751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.187 [2024-11-20 18:07:09.035780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.187 qpair failed and we were unable to recover it. 00:40:09.187 [2024-11-20 18:07:09.036130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.036168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 
00:40:09.188 [2024-11-20 18:07:09.036516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.036544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.036885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.036914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.037242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.037272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.037610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.037639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.037961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.037990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.038293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.038324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.038695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.038724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.039069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.039097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.039441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.039470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.039795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.039825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 
00:40:09.188 [2024-11-20 18:07:09.040193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.040224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.040542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.040578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.040890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.040918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.041252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.041282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.041543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.041572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.041905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.041934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.042267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.042297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.042643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.042672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.042913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.042942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.043272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.043301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 
00:40:09.188 [2024-11-20 18:07:09.043627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.043655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.043999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.044029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.044300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.044331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.044698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.044726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.045070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.045099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.045461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.045492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.045858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.045887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.046221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.046252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.046591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.046620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.046948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.046977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 
00:40:09.188 [2024-11-20 18:07:09.047320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.047350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.047694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.047723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.048064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.048093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.048439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.048469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.048815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.048844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.049189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.049220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.188 [2024-11-20 18:07:09.049453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.188 [2024-11-20 18:07:09.049481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.188 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.049909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.049937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.050278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.050308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.050658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.050688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 
00:40:09.189 [2024-11-20 18:07:09.051031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.051060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.051397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.051427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.051788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.051817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.052169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.052199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.052532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.052561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.052919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.052948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.053276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.053306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.053663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.053691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.054035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.054064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.054394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.054425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 
00:40:09.189 [2024-11-20 18:07:09.054745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.054773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.055134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.055178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.055544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.055572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.055914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.055943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.056295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.056325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.056667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.056696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.057058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.057086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.057410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.057440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.057779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.057808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.058130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.058166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 
00:40:09.189 [2024-11-20 18:07:09.058506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.058535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.058846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.058875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.059214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.059244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.059596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.059625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.059968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.059997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.060237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.060271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.060649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.060678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.061025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.061054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.061385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.061415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.063610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.063665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 
00:40:09.189 [2024-11-20 18:07:09.064027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.064058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.189 [2024-11-20 18:07:09.064394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.189 [2024-11-20 18:07:09.064427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.189 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.064835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.064864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.065225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.065257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.065618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.065647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.065991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.066020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.066369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.066401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.066713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.066742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.066979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.067012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.067329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.067360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 
00:40:09.472 [2024-11-20 18:07:09.067721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.067749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.068091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.068120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.068462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.068492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.068839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.068868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.069170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.069201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.069523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.069552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.069904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.069933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.070264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.070295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.070626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.070655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.071013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.071042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 
00:40:09.472 [2024-11-20 18:07:09.071370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.071400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.071743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.071779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.072028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.072057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.072374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.072405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.072717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.072746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.073100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.073129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.073504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.073534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.472 [2024-11-20 18:07:09.073874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.472 [2024-11-20 18:07:09.073902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.472 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.074248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.074277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.074630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.074659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 
00:40:09.473 [2024-11-20 18:07:09.075019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.075048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.075415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.075445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.075823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.075852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.076225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.076254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.076629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.076658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.076997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.077026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.077458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.077488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.077818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.077846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.078209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.078241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.078645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.078674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 
00:40:09.473 [2024-11-20 18:07:09.079017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.079046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.079380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.079410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.079755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.079783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.080115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.080144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.080477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.080507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.080817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.080846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.081186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.081217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.081533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.081562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.081894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.081922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.082245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.082276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 
00:40:09.473 [2024-11-20 18:07:09.082542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.082572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.082946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.082975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.083325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.083356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.083694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.083723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.084064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.084092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.084336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.084371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.084699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.084728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.084971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.085000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.085377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.085407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.085640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.085673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 
00:40:09.473 [2024-11-20 18:07:09.085981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.086010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.086254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.086291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.086640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.086669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.087039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.087068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.087396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.087427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.087784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.087814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.088147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.088185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.088515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.088545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.088858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.088888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.089224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.089256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 
00:40:09.473 [2024-11-20 18:07:09.089582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.089611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.089975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.090005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.090357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.090388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.090723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.090752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.091098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.091127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.091517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.473 [2024-11-20 18:07:09.091547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.473 qpair failed and we were unable to recover it. 00:40:09.473 [2024-11-20 18:07:09.091862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.091892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.092184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.092215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.092564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.092593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.092905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.092935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 
00:40:09.474 [2024-11-20 18:07:09.093146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.093191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.093547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.093576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.093909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.093939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.094251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.094283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.094601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.094630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.094973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.095002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.095330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.095362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.095660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.095689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.096016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.096046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.096394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.096425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 
00:40:09.474 [2024-11-20 18:07:09.096769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.096800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.097102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.097132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.097482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.097513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.097846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.097875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.098221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.098252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.098585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.098615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.098990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.099019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.099369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.099400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.099730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.099759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.100088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.100118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 
00:40:09.474 [2024-11-20 18:07:09.100447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.100479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.100834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.100870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.101213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.101244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.101569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.101597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.101911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.101940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.102273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.102303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.102639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.102668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.102968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.102999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.103320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.103351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.103692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.103721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 
00:40:09.474 [2024-11-20 18:07:09.104063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.104092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.104407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.104438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.104656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.104689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.105009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.105038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.105395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.105426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.105729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.105759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.105958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.105986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.106101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.106134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.106508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.106538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.106889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.106918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 
00:40:09.474 [2024-11-20 18:07:09.107273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.107303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.107624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.107653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.107975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.474 [2024-11-20 18:07:09.108004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.474 qpair failed and we were unable to recover it. 00:40:09.474 [2024-11-20 18:07:09.108260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.108291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.108619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.108648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.108996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.109025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.109369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.109399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.109728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.109757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.109974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.110003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.110236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.110267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 
00:40:09.475 [2024-11-20 18:07:09.110615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.110644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.111052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.111081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.111300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.111334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.111694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.111724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.112067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.112096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.112392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.112423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.112755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.112784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.113124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.113153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.113530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.113560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 00:40:09.475 [2024-11-20 18:07:09.113777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.475 [2024-11-20 18:07:09.113810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.475 qpair failed and we were unable to recover it. 
00:40:09.475 [2024-11-20 18:07:09.113917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.113947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.114273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.114310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.114649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.114679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.115026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.115055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.115401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.115431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.115793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.115822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.116175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.116206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.116513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.116542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.116884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.116913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.117263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.117293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.117636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.117665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.117979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.118007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.118326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.118357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.118697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.118727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.119070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.119098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.119426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.119457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.119784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.119813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.475 [2024-11-20 18:07:09.120137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.475 [2024-11-20 18:07:09.120174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.475 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.120485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.120514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.120877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.120907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.121262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.121291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.121612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.121641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.121885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.121914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.122222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.122251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.122578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.122606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.122959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.122987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.123335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.123364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.123710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.123739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.124082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.124112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.124478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.124508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.124696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.124725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.125043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.125072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.125409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.125439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.125769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.125801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.476 qpair failed and we were unable to recover it.
00:40:09.476 [2024-11-20 18:07:09.126154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.476 [2024-11-20 18:07:09.126194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.126535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.126564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.126896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.126926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.127267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.127299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.127651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.127680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.128011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.128040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.128390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.128420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.128759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.128793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.129135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.129182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.129564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.129593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.129922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.129950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.130299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.130330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.130665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.130694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.131047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.131075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.131426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.131456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.131813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.131841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.132196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.132226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.132590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.132619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.132884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.132913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.133271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.133301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.133616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.133645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.133987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.134016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.134265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.134299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.134676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.134705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.135112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.135140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.135476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.135505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.135740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.135769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.136090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.136118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.136447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.136477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.136820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.136849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.137112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.137141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.137510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.137540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.137827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.137855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.138101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.138130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.138538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.138568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.138884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.138913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.139253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.139283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.139631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.139660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.140012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.140042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.140277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.140310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.140726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.477 [2024-11-20 18:07:09.140755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.477 qpair failed and we were unable to recover it.
00:40:09.477 [2024-11-20 18:07:09.140970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.140999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.141334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.141363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.141718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.141747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.142102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.142131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.142485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.142515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.142870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.142899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.143241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.143276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.143616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.143644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.143998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.144026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.144387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.144416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.144817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.144846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.145168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.145199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.145553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.145581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.145728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd231e0 is same with the state(6) to be set
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Write completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Write completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Write completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Write completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Write completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 Read completed with error (sct=0, sc=8)
00:40:09.478 starting I/O failed
00:40:09.478 [2024-11-20 18:07:09.146634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:40:09.478 [2024-11-20 18:07:09.147062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.147116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b14000b90 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
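For context on the burst above: errno = 111 on Linux is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) when connect() was attempted, and once the qpair is declared dead every outstanding read/write is completed with an error before the CQ transport error -6 (ENXIO, "No such device or address") is reported. A minimal sketch of the failing socket-level operation, using plain POSIX calls rather than SPDK's posix.c, with the address and port taken from the log:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* Target taken from the log above: 10.0.0.2:4420 (NVMe/TCP). */
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener on the target, connect() fails and errno is
         * ECONNREFUSED (111 on Linux) -- the same condition that
         * posix_sock_create keeps logging while the driver retries. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

The retry loop and the tqpair pointers in the log are SPDK-internal; this sketch only reproduces the socket-level refusal, not the NVMe/TCP recovery logic.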
00:40:09.478 [2024-11-20 18:07:09.147644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.147673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.147970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.147986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.148397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.148450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.148665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.148691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.149003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.149019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.149407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.149423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.149717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.149732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.150022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.150037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.150373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.150389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.478 [2024-11-20 18:07:09.150687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.478 [2024-11-20 18:07:09.150702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.478 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.151037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.151052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.151357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.151372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.151645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.151661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.151978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.151994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.152313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.152329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.152613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.152628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.152958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.152973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.153259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.153275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.153585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.153600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.153793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.153808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.154141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.154156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.154456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.154471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.154750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.154766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.154964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.154978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.155316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.155331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.155659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.155673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.156007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.156022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.156321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.156337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.156689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.156704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.156993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.157007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.157319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.157335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.157622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.157636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.157941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.157955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.158231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.158246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.158563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.158577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.158915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.158930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.159262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.159278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.159591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.159606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.159923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.159938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.160148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.160179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.160477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.160494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.160779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.160794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.161078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.161092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.161279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.161296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.161596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.161611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.161931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.161946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.162280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.479 [2024-11-20 18:07:09.162296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.479 qpair failed and we were unable to recover it.
00:40:09.479 [2024-11-20 18:07:09.162615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.162630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.162955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.162970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.163287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.163302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.163619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.163636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.163921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.163936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.164303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.164318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.164642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.164657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.165028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.165043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.165401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.165417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.165703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.165718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.166049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.166064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.166371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.166386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.166713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.166728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.167060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.167074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.167443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.167458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.167787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.167803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.168130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.168145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.168506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.168521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.168802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.168816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.169148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.169174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.169511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.169526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.169714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.169728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.170043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.480 [2024-11-20 18:07:09.170058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.480 qpair failed and we were unable to recover it.
00:40:09.480 [2024-11-20 18:07:09.170379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.170394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.170714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.170729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.171013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.171028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.171367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.171382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.171666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.171681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.171985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.172000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.172315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.172331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.172650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.172665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.172952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.172967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.173290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.173306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 
00:40:09.480 [2024-11-20 18:07:09.173626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.173641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.173967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.173982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.174179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.174195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.174507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.174521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.174887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.174902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.175188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.175204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.175509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.175524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.175847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.175861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.176194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.176209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.480 [2024-11-20 18:07:09.176499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.176513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 
00:40:09.480 [2024-11-20 18:07:09.176876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.480 [2024-11-20 18:07:09.176891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.480 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.177176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.177193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.177474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.177489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.177772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.177786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.178071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.178086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.178381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.178396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.178689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.178703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.179016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.179031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.179231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.179247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.179597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.179612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 
00:40:09.481 [2024-11-20 18:07:09.179795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.179812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.180112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.180127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.180443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.180458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.180790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.180804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.181073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.181088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.181428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.181443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.181753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.181769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.182059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.182079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.182363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.182379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.182698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.182712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 
00:40:09.481 [2024-11-20 18:07:09.183045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.183060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.183397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.183413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.183750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.183765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.184136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.184150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.481 qpair failed and we were unable to recover it. 00:40:09.481 [2024-11-20 18:07:09.184517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.481 [2024-11-20 18:07:09.184532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.184857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.184872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.185202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.185218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.185529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.185543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.185826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.185841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.186169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.186184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 
00:40:09.482 [2024-11-20 18:07:09.186518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.186532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.186897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.186912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.187241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.187256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.187627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.187642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.187929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.187944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.188215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.188231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.188565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.188580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.188869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.188883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.189172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.189187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.189406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.189420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 
00:40:09.482 [2024-11-20 18:07:09.189738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.189752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.190091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.190105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.190422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.190437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.190720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.190736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.191066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.191086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.191371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.191386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.191714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.191728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.192061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.192075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.192299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.192315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.192660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.192675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 
00:40:09.482 [2024-11-20 18:07:09.192857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.192872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.193194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.193209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.193549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.193563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.193850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.193864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.194167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.194183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.194469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.194483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.194793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.194807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.195139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.195154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.195460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.195476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.195797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.195812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 
00:40:09.482 [2024-11-20 18:07:09.196094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.196110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.196436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.196451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.196777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.196792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.197122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.197136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.197433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.482 [2024-11-20 18:07:09.197449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.482 qpair failed and we were unable to recover it. 00:40:09.482 [2024-11-20 18:07:09.197735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.197750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.198043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.198058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.198379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.198395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.198713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.198728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.199053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.199068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 
00:40:09.483 [2024-11-20 18:07:09.199348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.199363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.199566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.199583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.199904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.199919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.200246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.200262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.200458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.200474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.200767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.200781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.201098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.201113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.201332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.201348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.201641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.201656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.201977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.201992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 
00:40:09.483 [2024-11-20 18:07:09.202323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.202338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.202634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.202648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.203005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.203019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.203320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.203335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.203629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.203643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.203969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.203987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.204304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.204319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.204600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.204615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.204947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.204961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.205293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.205308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 
00:40:09.483 [2024-11-20 18:07:09.205612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.205626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.205954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.205970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.206290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.206306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.206630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.206644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.206930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.206944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.207172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.207188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.207479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.207494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.207692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.207709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.207933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.207947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.208259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.208275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 
00:40:09.483 [2024-11-20 18:07:09.208611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.208626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.208943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.208958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.209280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.209295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.209617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.209632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.209951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.209966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.210264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.210279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.210466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.210480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.210827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.483 [2024-11-20 18:07:09.210842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.483 qpair failed and we were unable to recover it. 00:40:09.483 [2024-11-20 18:07:09.211172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.211187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.211364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.211379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 
00:40:09.484 [2024-11-20 18:07:09.211586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.211601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.211883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.211897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.212172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.212187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.212526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.212541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.212826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.212842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.213168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.213183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.213497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.213511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.213835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.213849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.214184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.214199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.214512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.214526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 
00:40:09.484 [2024-11-20 18:07:09.214813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.214827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.215173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.215189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.215513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.215528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.215859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.215873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.216209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.216225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.216541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.216555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.216877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.216892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.217205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.217221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.217490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.217504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 00:40:09.484 [2024-11-20 18:07:09.217811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.484 [2024-11-20 18:07:09.217826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.484 qpair failed and we were unable to recover it. 
00:40:09.484 [2024-11-20 18:07:09.218166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:09.484 [2024-11-20 18:07:09.218182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 
00:40:09.484 qpair failed and we were unable to recover it. 
[... the same three-line failure — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats back-to-back for the remainder of this span, timestamps 18:07:09.218 through 18:07:09.284; identical entries elided ...] 
00:40:09.489 [2024-11-20 18:07:09.284726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:09.489 [2024-11-20 18:07:09.284742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 
00:40:09.489 qpair failed and we were unable to recover it. 
00:40:09.489 [2024-11-20 18:07:09.285025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.285040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.285248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.285263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.285637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.285652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.285977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.285992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.286322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.286337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.286603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.286617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.286938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.286953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.287243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.287259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.287579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.287593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.287879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.287893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 
00:40:09.489 [2024-11-20 18:07:09.288204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.288220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.288442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.288457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.288770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.288784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.289074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.289088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.489 [2024-11-20 18:07:09.289417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.489 [2024-11-20 18:07:09.289432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.489 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.289762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.289778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.289965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.289979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.290274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.290292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.290634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.290648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.290971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.290986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 
00:40:09.490 [2024-11-20 18:07:09.291323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.291338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.291655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.291669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.291993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.292007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.292327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.292343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.292679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.292694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.292976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.292991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.293323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.293338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.293677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.293692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.294007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.294022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.294316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.294331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 
00:40:09.490 [2024-11-20 18:07:09.294624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.294638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.294981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.294996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.295325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.295341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.295670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.295684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.295885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.295901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.296118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.296133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.296473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.296488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.296769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.296784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.297109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.297124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.297448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.297464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 
00:40:09.490 [2024-11-20 18:07:09.297639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.297655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.298024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.298039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.298316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.298331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.298646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.298660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.298945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.298960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.299277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.299293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.299617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.299631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.299954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.299969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.300302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.300318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.300593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.300608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 
00:40:09.490 [2024-11-20 18:07:09.300919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.300934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.301219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.490 [2024-11-20 18:07:09.301234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.490 qpair failed and we were unable to recover it. 00:40:09.490 [2024-11-20 18:07:09.301530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.301545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.301873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.301887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.302200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.302215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.302545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.302560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.302892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.302906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.303230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.303245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.303577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.303595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.303928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.303942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 
00:40:09.491 [2024-11-20 18:07:09.304214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.304229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.304548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.304562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.304895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.304909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.305243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.305258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.305433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.305450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.305809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.305823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.306109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.306123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.306442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.306457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.306735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.306749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.307078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.307093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 
00:40:09.491 [2024-11-20 18:07:09.307376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.307391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.307721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.307736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.308068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.308083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.308380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.308395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.308770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.308785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.309068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.309082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.309381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.309396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.309713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.309727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.310011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.310026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.310328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.310344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 
00:40:09.491 [2024-11-20 18:07:09.310680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.310694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.311024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.311038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.311227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.311242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.311521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.311535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.311858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.311873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.312144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.312164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.312477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.312492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.312808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.312822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.313115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.313130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.313465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.313481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 
00:40:09.491 [2024-11-20 18:07:09.313774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.313788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.314076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.314091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.314417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.314432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.314756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.491 [2024-11-20 18:07:09.314770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.491 qpair failed and we were unable to recover it. 00:40:09.491 [2024-11-20 18:07:09.315060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.315075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.315389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.315405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.315727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.315741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.316066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.316081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.316393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.316409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.316784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.316799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 
00:40:09.492 [2024-11-20 18:07:09.317089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.317103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.317392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.317407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.317690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.317704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.318031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.318046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.318447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.318463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.318734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.318750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.319071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.319086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.319378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.319394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.319680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.319695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.319883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.319898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 
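For anyone triaging this block: errno = 111 is Linux's ECONNREFUSED, i.e. the host's TCP connect() to 10.0.0.2:4420 (the standard NVMe/TCP port) is refused because no target is listening while it restarts, so each reconnect attempt in nvme_tcp_qpair_connect_sock fails immediately. A minimal shell-level sketch of the same probe, using bash's /dev/tcp redirection; nothing SPDK-specific, and the address and port are simply copied from the log:

    # Probe the target port the way the failing connect() does; with no
    # listener the redirection is refused, the shell-level view of errno 111.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "connect() refused or timed out (ECONNREFUSED is errno 111)"
    fi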
00:40:09.492 [2024-11-20 18:07:09.320216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.320232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2943458 Killed "${NVMF_APP[@]}" "$@" 00:40:09.492 [2024-11-20 18:07:09.320565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.320579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.320868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.320886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:40:09.492 [2024-11-20 18:07:09.321233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.321248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.321516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.321531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:09.492 [2024-11-20 18:07:09.321854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.321869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.322064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.322081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 
00:40:09.492 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:09.492 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:09.492 [2024-11-20 18:07:09.322425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.322441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.322765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.322780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.323107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.323121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.323447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.323462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.323792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.323807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.324119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.324134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.324501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.324516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.324856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.324870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.325211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.325226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 
00:40:09.492 [2024-11-20 18:07:09.325509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.325523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.325736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.325751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.326069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.326083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.326392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.326407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.326742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.326756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.326970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.326984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.327323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.327338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.327709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.327724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.328052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.328067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 00:40:09.492 [2024-11-20 18:07:09.328297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.492 [2024-11-20 18:07:09.328313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.492 qpair failed and we were unable to recover it. 
00:40:09.492 [2024-11-20 18:07:09.328585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.493 [2024-11-20 18:07:09.328600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.493 qpair failed and we were unable to recover it. 00:40:09.493 [2024-11-20 18:07:09.328930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.493 [2024-11-20 18:07:09.328948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.493 qpair failed and we were unable to recover it. 00:40:09.493 [2024-11-20 18:07:09.329263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.493 [2024-11-20 18:07:09.329279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.493 qpair failed and we were unable to recover it. 00:40:09.493 [2024-11-20 18:07:09.329483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.493 [2024-11-20 18:07:09.329499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.493 qpair failed and we were unable to recover it. 00:40:09.493 [2024-11-20 18:07:09.329822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.493 [2024-11-20 18:07:09.329837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.493 qpair failed and we were unable to recover it. 00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=2944405 00:40:09.493 [2024-11-20 18:07:09.330115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.493 [2024-11-20 18:07:09.330131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.493 qpair failed and we were unable to recover it. 00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 2944405 00:40:09.493 [2024-11-20 18:07:09.330430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.493 [2024-11-20 18:07:09.330446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.493 qpair failed and we were unable to recover it. 00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2944405 ']' 00:40:09.493 [2024-11-20 18:07:09.330774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.493 [2024-11-20 18:07:09.330789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.493 qpair failed and we were unable to recover it. 
00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:40:09.493 18:07:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:09.771 [2024-11-20 18:07:09.382320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.382335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.382531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.382545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.382810] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:40:09.771 [2024-11-20 18:07:09.382855] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:40:09.771 [2024-11-20 18:07:09.382863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.382878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.383068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.383082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.383331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.383346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.383679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.383694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.384000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.384016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.384329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.384345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.384546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.384561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.384740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.384755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.385107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.385122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.385457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.385473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.385809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.385825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.386174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.386190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.386390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.386405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.771 qpair failed and we were unable to recover it.
00:40:09.771 [2024-11-20 18:07:09.386736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.771 [2024-11-20 18:07:09.386752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.387083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.387099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.387303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.387319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.387670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.387685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.388024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.388040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.388324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.388340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.388552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.388569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.388958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.388973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.389171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.389187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.389389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.389407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.389735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.389751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.389936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.389951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.390278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.390294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.390575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.390590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.390916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.390932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.391219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.391235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.391448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.391463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.391746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.391762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.391945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.391961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.392304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.392319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.392647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.392662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.392945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.392960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.393298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.393314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.393525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.393540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.393883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.393898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.394181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.394197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.394494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.394509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.394724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.394739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.394933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.394948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.395144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.395163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.395495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.395511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.395847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.395863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.772 [2024-11-20 18:07:09.396063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.772 [2024-11-20 18:07:09.396078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.772 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.396372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.396388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.396723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.396737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.397042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.397056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.397353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.397375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.397704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.397718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.398003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.398018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.398198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.398214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.398399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.398413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.398715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.398729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.399086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.399101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.399411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.399426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.399761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.399775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.400113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.400128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.400336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.400351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.400715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.400730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.401108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.401123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.401311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.401326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.401519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.401534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.401851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.401866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.402068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.402083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Write completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Write completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 Read completed with error (sct=0, sc=8)
00:40:09.773 starting I/O failed
00:40:09.773 [2024-11-20 18:07:09.402838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:40:09.773 [2024-11-20 18:07:09.403394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.403501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.403913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.403950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.404445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.404497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.404841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.404859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.773 [2024-11-20 18:07:09.405213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.773 [2024-11-20 18:07:09.405230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.773 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.405538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.405553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.405892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.405907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.406108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.406122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.406446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.406461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.406670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.406685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.406882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.406898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.407224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.407240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.407582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.407597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.407897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.407911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.408250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.408265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.408593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.408607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.408953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.408969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.409188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.409208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.409421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.409439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.409810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.409826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.410107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.410122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.410432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.410447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.410674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.410690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.410746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.410761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.411043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.411058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.411389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.411405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.411727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.411743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.412028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.412043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.412325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.412340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.412639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.412653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.412994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.413009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.413327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.413343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.413674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.413689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.413986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.414001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.414354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.414369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.414697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.414712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.415049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.415064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.415366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.415381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.415709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.415723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.416049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.416064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.416369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.416384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.416711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.416725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.774 qpair failed and we were unable to recover it.
00:40:09.774 [2024-11-20 18:07:09.417051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.774 [2024-11-20 18:07:09.417066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.417380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.417395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.417699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.417717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.418069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.418085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.418395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.418411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.418741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.418756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.419040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.419055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.419251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.419266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.419616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.419631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.419965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.419981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.420305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.420321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.420623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.420638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.420827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.420854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.421031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.421047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.421275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.421291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.421622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.421637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.421823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.421842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.422169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.422184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.422538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.422553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.422880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.422896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.423187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.423203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.423385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.423401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.423604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.423619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.423980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.423998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.424291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.424306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.424610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.424626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.424911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.424927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.425226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.425243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.425566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.425582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.425906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.425921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.426108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.426123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.426436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.426452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.426637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.775 [2024-11-20 18:07:09.426653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.775 qpair failed and we were unable to recover it.
00:40:09.775 [2024-11-20 18:07:09.426970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.426985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.427304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.427319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.427648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.427663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.427984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.427999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.428293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.428309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.428506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.428521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.428847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.428862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.429145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.429178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.429412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.429426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.429621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.429638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.429931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.429949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.430285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.430301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.430519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.430536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.430869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.430884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.431169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.431185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.431396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.776 [2024-11-20 18:07:09.431410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.776 qpair failed and we were unable to recover it.
00:40:09.776 [2024-11-20 18:07:09.431730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.431744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.432070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.432085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.432301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.432317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.432509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.432524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.432845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.432860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.433185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.433200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.433505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.433519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.433724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.433746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.434110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.776 [2024-11-20 18:07:09.434126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.776 qpair failed and we were unable to recover it. 00:40:09.776 [2024-11-20 18:07:09.434418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.434434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 
00:40:09.777 [2024-11-20 18:07:09.434786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.434801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.435124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.435138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.435487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.435503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.435834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.435850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.436175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.436191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.436504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.436519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.436846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.436861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.437193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.437209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.437467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.437482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.437810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.437824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 
00:40:09.777 [2024-11-20 18:07:09.438007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.438022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.438327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.438343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.438579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.438594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.438879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.438895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.439187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.439202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.439501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.439515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.439807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.439822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.440139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.440154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.440488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.440504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.440716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.440731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 
00:40:09.777 [2024-11-20 18:07:09.441045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.441060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.441393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.441409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.441697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.441713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.777 qpair failed and we were unable to recover it. 00:40:09.777 [2024-11-20 18:07:09.441896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.777 [2024-11-20 18:07:09.441912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.442237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.442253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.442574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.442593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.442891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.442906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.443224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.443240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.443423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.443438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.443689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.443704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 
00:40:09.778 [2024-11-20 18:07:09.444031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.444045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.444369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.444385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.444728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.444743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.445067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.445082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.445382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.445397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.445618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.445633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.445955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.445970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.446272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.446287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.446590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.446605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.446932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.446947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 
00:40:09.778 [2024-11-20 18:07:09.447271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.447288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.447624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.447639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.447840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.447855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.448189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.448205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.448517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.448532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.448853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.448868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.449190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.449206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.449395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.778 [2024-11-20 18:07:09.449410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.778 qpair failed and we were unable to recover it. 00:40:09.778 [2024-11-20 18:07:09.449742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.449757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.450078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.450092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 
00:40:09.779 [2024-11-20 18:07:09.450480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.450497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.450775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.450789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.451111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.451129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.451330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.451345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.451711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.451726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.452027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.452042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.452373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.452389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.452591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.452607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.452815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.452830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.453119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.453133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 
00:40:09.779 [2024-11-20 18:07:09.453444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.453459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.453775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.453790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.454058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.454073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.454402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.454417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.454706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.454721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.454927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.454942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.455129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.455143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.455469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.455485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.455690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.455706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.456040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.456055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 
00:40:09.779 [2024-11-20 18:07:09.456396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.456412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.779 [2024-11-20 18:07:09.456690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.779 [2024-11-20 18:07:09.456704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.779 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.457023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.457038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.457360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.457375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.457704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.457718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.457932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.457946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.458226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.458252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.458551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.458566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.458874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.458888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.459091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.459105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 
00:40:09.780 [2024-11-20 18:07:09.459501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.459516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.459794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.459809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.460096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.460111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.460432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.460447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.460774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.460790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.461116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.461132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.461463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.461479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.461667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.461682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.461979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.461994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.462327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.462343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 
00:40:09.780 [2024-11-20 18:07:09.462676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.462691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.462970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.462984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.463292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.463307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.463627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.463645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.463844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.463859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.464209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.464224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.780 qpair failed and we were unable to recover it. 00:40:09.780 [2024-11-20 18:07:09.464545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.780 [2024-11-20 18:07:09.464560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.464884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.464899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.465089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.465107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.465306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.465321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 
00:40:09.781 [2024-11-20 18:07:09.465600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.465617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.465743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.465758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.466107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.466121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.466293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.466308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.466667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.466682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.466898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.466914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.467231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.467245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.467531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.467547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.467877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.467892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 00:40:09.781 [2024-11-20 18:07:09.468071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.781 [2024-11-20 18:07:09.468086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.781 qpair failed and we were unable to recover it. 
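errno 111 on Linux is ECONNREFUSED: each connect() to 10.0.0.2 port 4420 (the NVMe/TCP listener the test expects) is answered with a reset because nothing is accepting on that port yet. The following is a minimal sketch in plain POSIX sockets, not SPDK's posix_sock_create(), that reproduces the same errno; the loopback address is an illustrative assumption (any port with no listener behaves the same).

/* Minimal sketch (plain POSIX sockets, not SPDK code): reproduce
 * errno 111 (ECONNREFUSED) the same way the log's connect() does,
 * by dialing a port where nothing is listening. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumed: no listener here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}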
00:40:09.781 [... connect()/qpair error continues for tqpair=0xd152d0 from 18:07:09.468 through 18:07:09.470 ...]
00:40:09.781 [2024-11-20 18:07:09.470115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:40:09.781 [... connect()/qpair error continues for tqpair=0xd152d0 through 18:07:09.471 ...]
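The NOTICE interleaved above comes from the target side: spdk_app_start() in app.c printing its core count as the application finally boots, which is why the refused connections eventually stop. As a rough sketch of the start-up path that emits that banner, assuming the public spdk/event.h API and a placeholder app name (the real target in this test is SPDK's nvmf_tgt, not this skeleton):

/* Skeleton of an SPDK application start-up (illustrative; signatures follow
 * the public spdk/event.h API as of SPDK v24.09). spdk_app_start() prints
 * the "Total cores available" NOTICE seen in the log while launching the
 * reactors. */
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
    /* Target/subsystem setup would happen here; this sketch just exits. */
    spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};
    int rc;

    (void)argc;
    (void)argv;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "example_app";    /* placeholder name, not the real target */

    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}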
00:40:09.782 [... connect()/qpair error continues for tqpair=0xd152d0 with fresh timestamps from 18:07:09.471 through 18:07:09.482 ...]
00:40:09.783 [2024-11-20 18:07:09.483236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.483333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.483644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.483683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.483972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.484003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.484363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.484395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.484768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.484798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.485006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.485036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.485233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.485251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.485434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.485449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.485743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.485760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.485975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.485990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.486331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.486347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.486656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.486671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.486995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.487010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.487290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.487305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.487635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.487654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.487859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.487874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.488122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.488137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.488451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.488467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.488829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.488844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.489179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.489197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.489549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.489565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.489913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.783 [2024-11-20 18:07:09.489928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.783 qpair failed and we were unable to recover it.
00:40:09.783 [2024-11-20 18:07:09.490245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.490260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.490558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.490572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.490879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.490894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.491216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.491231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.491542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.491557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.491911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.491926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.492269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.492285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.492614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.492630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.492959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.492974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.493307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.493323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.493648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.493664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.494028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.494043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.494346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.494362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.494669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.494684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.494890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.494905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.495208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.495224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.495530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.495546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.495751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.495765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.496140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.496155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.496448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.496464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.496806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.496822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.497151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.497174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.497472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.497488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.497763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.497778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.498094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.498110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.498447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.498464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.498771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.498787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.499112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.499128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.499458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.499474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.499754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.499769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.500049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.500063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.500443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.500458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.500769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.500784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.501058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.501075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.501502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.501517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.501839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.501854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.502179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.784 [2024-11-20 18:07:09.502195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.784 qpair failed and we were unable to recover it.
00:40:09.784 [2024-11-20 18:07:09.502487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.502502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.502861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.502875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.502908] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:40:09.785 [2024-11-20 18:07:09.502937] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:40:09.785 [2024-11-20 18:07:09.502945] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:40:09.785 [2024-11-20 18:07:09.502952] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:40:09.785 [2024-11-20 18:07:09.502957] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:40:09.785 [2024-11-20 18:07:09.503095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.503109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.503097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:40:09.785 [2024-11-20 18:07:09.503228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:40:09.785 [2024-11-20 18:07:09.503370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:40:09.785 [2024-11-20 18:07:09.503370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:40:09.785 [2024-11-20 18:07:09.503575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.503590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.503909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.503923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.504248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.504268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.504587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.504604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.504890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.504905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
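The app_setup_trace notices above spell out how to pull the nvmf tracepoint data out of the running target. A minimal sketch of that capture, assuming a shell on the test node, spdk_trace on PATH, and that shared-memory instance 0 matches the '-i 0' in the notice (the output paths are illustrative):

  # Snapshot the nvmf tracepoint group from the running SPDK app (instance 0)
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # Or, as the notice suggests, keep the raw shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0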
00:40:09.785 [2024-11-20 18:07:09.505185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.505200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.505514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.505528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.505846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.505861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.506064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.506079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.506408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.506424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.506754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.506769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.507116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.507131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.507439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.507455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.507759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.507773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.507964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.507980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.508246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.508261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.508587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.508601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.508804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.508822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.509134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.509149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.509487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.509502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.509816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.509830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.510052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.510069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.510394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.510410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.510644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.510659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.510983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.510998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.511279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.511294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.511504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.511520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.511719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.511733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.511940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.511957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.512246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.512261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.512431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.512445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.785 [2024-11-20 18:07:09.512566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.785 [2024-11-20 18:07:09.512584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.785 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.512908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.512922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.513223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.513240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.513521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.513535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.513726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.513741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.513968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.513983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.514290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.514305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.514498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.514512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.514858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.514873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.515187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.515202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.515523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.515538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.515897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.515912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.516098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.516114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.516419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.516438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.516760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.516774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.517097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.517112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.517305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.517320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.517632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.517647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.517929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.517944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.518230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.518246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.518573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.518588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.518906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.518921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.519304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.519320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.519495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.519510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.519874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.519889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.520090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.520105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.520342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.520359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.520711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.520726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.520927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.520943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.521266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.521281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.521577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.521592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.521916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.521931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.522132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.522146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.522425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.522442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.522759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.522774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.523071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.523085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.523422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.523438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.523639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.523656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.786 qpair failed and we were unable to recover it.
00:40:09.786 [2024-11-20 18:07:09.523851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.786 [2024-11-20 18:07:09.523868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.524073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.524088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.524383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.524399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.524589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.524605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.524959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.524975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.525247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.525263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.525542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.525558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.525839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.525853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.526193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.526209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.526488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.526504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.526843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.526859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.526954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.526969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.527254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.527270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.527586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.527602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.527805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.527822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.528092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.528107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.528294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.528313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.528625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.528640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.528837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.528852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.529069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.529084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.529346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.529363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.529647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.529663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.530003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.530019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.530335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.530351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.530670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.530684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.531036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.531051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.531381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.531396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.531678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.531692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.531966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.531980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.532277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.532293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.532580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.532594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.532903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.532917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.533235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.533250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.787 [2024-11-20 18:07:09.533555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.787 [2024-11-20 18:07:09.533569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.787 qpair failed and we were unable to recover it.
00:40:09.788 [2024-11-20 18:07:09.533754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.788 [2024-11-20 18:07:09.533768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.788 qpair failed and we were unable to recover it.
00:40:09.788 [2024-11-20 18:07:09.534096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.788 [2024-11-20 18:07:09.534110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.788 qpair failed and we were unable to recover it.
00:40:09.788 [2024-11-20 18:07:09.534436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.534451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.534725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.534739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.535057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.535072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.535425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.535440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.535510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.535524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.535808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.535824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.536150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.536171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.536241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.536257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.536583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.536597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.536911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.536925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 
00:40:09.788 [2024-11-20 18:07:09.537201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.537216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.537493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.537508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.537812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.537827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.538102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.538117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.538291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.538307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.538493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.538508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.538725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.538740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.539054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.539069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.539360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.539376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.539700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.539715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 
00:40:09.788 [2024-11-20 18:07:09.539893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.539908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.540212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.540229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.540532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.540547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.540936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.540950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.541135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.541151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.541254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.541268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.541583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.541597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.541882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.541897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.542202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.542217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.542491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.542505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 
00:40:09.788 [2024-11-20 18:07:09.542689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.542704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.542971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.542986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.543299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.543314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.543476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.543491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.788 [2024-11-20 18:07:09.543986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.788 [2024-11-20 18:07:09.544087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.788 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.544692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.544784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.545193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.545233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.545551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.545567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.545793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.545807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.546100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.546116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 
00:40:09.789 [2024-11-20 18:07:09.546332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.546349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.546666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.546681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.547002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.547017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.547281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.547296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.547615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.547630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.547911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.547926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.548206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.548221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.548522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.548537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.548867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.548885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.549167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.549182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 
00:40:09.789 [2024-11-20 18:07:09.549416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.549432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.553549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.553601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.553907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.553925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.554391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.554443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.554829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.554848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.555019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.555034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.555329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.555345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.555654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.555669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.555852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.555867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.556185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.556200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 
00:40:09.789 [2024-11-20 18:07:09.556399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.556414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.556601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.556615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.556923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.556940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.557173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.557190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.557495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.557511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.557729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.557744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.557911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.557925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.558220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.558236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.558566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.558580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.558896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.558911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 
00:40:09.789 [2024-11-20 18:07:09.559223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.559239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.559530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.789 [2024-11-20 18:07:09.559545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.789 qpair failed and we were unable to recover it. 00:40:09.789 [2024-11-20 18:07:09.559822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.559837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.560152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.560171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.560496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.560511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.560628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.560643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.561040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.561123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.561364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.561399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.561712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.561744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.562047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.562076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 
00:40:09.790 [2024-11-20 18:07:09.562301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.562332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.562643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.562661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.562878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.562893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.563226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.563242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.563565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.563579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.563775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.563792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.563961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.563976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.564221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.564236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.564537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.564553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.564735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.564750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 
00:40:09.790 [2024-11-20 18:07:09.564928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.564944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.565275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.565291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.565614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.565629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.565823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.565838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.566173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.566188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.566383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.566398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.566628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.566644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.566969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.566984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.567308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.567323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.567540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.567555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 
00:40:09.790 [2024-11-20 18:07:09.567829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.567844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.568124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.568139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.568461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.568478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.568663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.568679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.569008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.569023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.569210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.569227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.569548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.569563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.569891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.569906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.570241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.570257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 00:40:09.790 [2024-11-20 18:07:09.570586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.790 [2024-11-20 18:07:09.570602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.790 qpair failed and we were unable to recover it. 
00:40:09.790 [2024-11-20 18:07:09.570925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.570940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.571218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.571234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.571511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.571527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.571857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.571872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.571944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.571961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.572269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.572285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.572612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.572630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.572800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.572816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.572995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.573013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.573182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.573197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 
00:40:09.791 [2024-11-20 18:07:09.573377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.573392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.573676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.573691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.573876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.573891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.574228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.574244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.574563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.574578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.574899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.574915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.575087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.575103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.575372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.575388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.575570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.575585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.575911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.575926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 
00:40:09.791 [2024-11-20 18:07:09.576214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.576230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.576507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.576522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.576810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.576826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.577008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.577025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.577315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.577331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.577636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.577651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.577983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.577998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.578277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.578293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.578622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.578637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.578852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.578867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 
00:40:09.791 [2024-11-20 18:07:09.579037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.579053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.579352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.579368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.579634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.579649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.791 [2024-11-20 18:07:09.579970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.791 [2024-11-20 18:07:09.579988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.791 qpair failed and we were unable to recover it. 00:40:09.792 [2024-11-20 18:07:09.580156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.792 [2024-11-20 18:07:09.580176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.792 qpair failed and we were unable to recover it. 00:40:09.792 [2024-11-20 18:07:09.580364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.792 [2024-11-20 18:07:09.580380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.792 qpair failed and we were unable to recover it. 00:40:09.792 [2024-11-20 18:07:09.580720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.792 [2024-11-20 18:07:09.580735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.792 qpair failed and we were unable to recover it. 00:40:09.792 [2024-11-20 18:07:09.580945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.792 [2024-11-20 18:07:09.580961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.792 qpair failed and we were unable to recover it. 00:40:09.792 [2024-11-20 18:07:09.581225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.792 [2024-11-20 18:07:09.581241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.792 qpair failed and we were unable to recover it. 00:40:09.792 [2024-11-20 18:07:09.581537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.792 [2024-11-20 18:07:09.581552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.792 qpair failed and we were unable to recover it. 
00:40:09.792 [2024-11-20 18:07:09.581872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.792 [2024-11-20 18:07:09.581886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.792 qpair failed and we were unable to recover it.
00:40:09.792 [... the same three-line failure (posix.c:1055 "connect() failed, errno = 111", nvme_tcp.c:2399 "sock connection error", "qpair failed and we were unable to recover it.") repeats verbatim for every connect attempt from 18:07:09.581872 through 18:07:09.643084, all against addr=10.0.0.2, port=4420 and tqpair=0xd152d0, except five attempts at 18:07:09.621525-18:07:09.623023 made against tqpair=0x7f0b14000b90 ...]
00:40:09.797 [2024-11-20 18:07:09.643069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:09.797 [2024-11-20 18:07:09.643084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:09.797 qpair failed and we were unable to recover it.
00:40:09.797 [2024-11-20 18:07:09.643392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.797 [2024-11-20 18:07:09.643407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.797 qpair failed and we were unable to recover it. 00:40:09.797 [2024-11-20 18:07:09.643709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.797 [2024-11-20 18:07:09.643723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.797 qpair failed and we were unable to recover it. 00:40:09.797 [2024-11-20 18:07:09.643910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.797 [2024-11-20 18:07:09.643924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.797 qpair failed and we were unable to recover it. 00:40:09.797 [2024-11-20 18:07:09.644302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.797 [2024-11-20 18:07:09.644316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.797 qpair failed and we were unable to recover it. 00:40:09.797 [2024-11-20 18:07:09.644492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.797 [2024-11-20 18:07:09.644508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.797 qpair failed and we were unable to recover it. 00:40:09.797 [2024-11-20 18:07:09.644693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.797 [2024-11-20 18:07:09.644709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.797 qpair failed and we were unable to recover it. 00:40:09.797 [2024-11-20 18:07:09.644934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.797 [2024-11-20 18:07:09.644949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.797 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.645117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.645131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.645419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.645434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.645767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.645781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 
00:40:09.798 [2024-11-20 18:07:09.646107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.646122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.646415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.646430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.646751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.646765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.647102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.647116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.647213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.647227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.647640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.647734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.647908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.647947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.648207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.648241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b08000b90 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.648597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.648615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.648988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.649002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 
00:40:09.798 [2024-11-20 18:07:09.649239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.649253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.649534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.649548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.649876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.649891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.650218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.650234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.650552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.650567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.650850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.650864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.651152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.651173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.651491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.651505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.651679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.651701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.652018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.652033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 
00:40:09.798 [2024-11-20 18:07:09.652360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.652376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.652674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.652689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.653016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.653031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.653373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.653389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.653700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.653714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.654038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.654053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.654348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.654363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.654537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.654552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.654834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.654849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.655162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.655178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 
00:40:09.798 [2024-11-20 18:07:09.655558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.655573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.655896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.655911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.656234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.656250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.656575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.656590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.656868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.798 [2024-11-20 18:07:09.656883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.798 qpair failed and we were unable to recover it. 00:40:09.798 [2024-11-20 18:07:09.657164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.657181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.657520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.657534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.657859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.657874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.658177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.658193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.658493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.658510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 
00:40:09.799 [2024-11-20 18:07:09.658795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.658810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.659012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.659026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.659224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.659240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.659428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.659444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.659729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.659744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.660056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.660074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.660416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.660432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.660758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.660772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.661049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.661063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.661390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.661405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 
00:40:09.799 [2024-11-20 18:07:09.661687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.661701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.661891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.661915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.662245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.662260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.662575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.662589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.662901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.662915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.663195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.663210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.663505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.663520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.663695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.663709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.664012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.664027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.664321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.664337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 
00:40:09.799 [2024-11-20 18:07:09.664625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.664640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.664947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.664962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.665300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.665316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.665633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.665647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.665925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.665940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.666256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.666271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.666600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.666614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.666888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.666903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.667242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.667258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.667432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.667447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 
00:40:09.799 [2024-11-20 18:07:09.667779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.799 [2024-11-20 18:07:09.667794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:09.799 qpair failed and we were unable to recover it. 00:40:09.799 [2024-11-20 18:07:09.668074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.668090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.668410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.668426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.668709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.668723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.669044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.669058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.669382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.669396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.669743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.669757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.670071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.670085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.670413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.670429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.670754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.670769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 
00:40:10.090 [2024-11-20 18:07:09.671096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.671112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.671431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.671446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.671673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.671687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.671966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.671980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.672169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.672184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.672507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.672521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.672801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.672818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.673104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.673118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.673443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.673458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.673770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.673784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 
00:40:10.090 [2024-11-20 18:07:09.673843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.673856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.674192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.674207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.674486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.674500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.674673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.674687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.674987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.675002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.675376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.675391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.675594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.675609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.675936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.675951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.676228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.676242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.676426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.676441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 
00:40:10.090 [2024-11-20 18:07:09.676763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.676778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.677053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.677067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.677253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.677277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.677457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.677472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.677783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.677798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.678117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.678131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.678480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.678495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.090 [2024-11-20 18:07:09.678764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.090 [2024-11-20 18:07:09.678778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.090 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.678958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.678972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.679300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.679315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 
00:40:10.091 [2024-11-20 18:07:09.679483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.679499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.679784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.679798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.680111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.680125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.680299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.680317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.680461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.680475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.680779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.680793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.680962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.680977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.681317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.681332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.681532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.681546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 00:40:10.091 [2024-11-20 18:07:09.681869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.091 [2024-11-20 18:07:09.681883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.091 qpair failed and we were unable to recover it. 
00:40:10.091 [2024-11-20 18:07:09.682211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.091 [2024-11-20 18:07:09.682226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.091 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with only the timestamps advancing, from 18:07:09.682 through 18:07:09.743 (log time 00:40:10.091 to 00:40:10.097) ...]
00:40:10.097 [2024-11-20 18:07:09.743087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.097 [2024-11-20 18:07:09.743101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.097 qpair failed and we were unable to recover it.
00:40:10.097 [2024-11-20 18:07:09.743428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.743443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.743721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.743739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.744073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.744088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.744383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.744398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.744726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.744740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.744840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.744854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.745166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.745182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.745503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.745518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.745716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.745733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.745934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.745948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 
00:40:10.097 [2024-11-20 18:07:09.746231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.746248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.746530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.746544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.746726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.746741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.746956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.746971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.747267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.747282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.747635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.747649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.747929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.747943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.748256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.748271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.748600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.748614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.748938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.748953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 
00:40:10.097 [2024-11-20 18:07:09.749120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.749135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.749448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.749463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.749795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.749810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.750142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.750157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.750506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.750520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.750840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.750855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.751193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.751209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.751300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.751316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.751596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.751613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.751917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.751932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 
00:40:10.097 [2024-11-20 18:07:09.752147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.752175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.752562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.752576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.752859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.752874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.753208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.753223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.753467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.753481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.753652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.097 [2024-11-20 18:07:09.753665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.097 qpair failed and we were unable to recover it. 00:40:10.097 [2024-11-20 18:07:09.753980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.753994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.754224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.754239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.754529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.754544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.754858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.754872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 
00:40:10.098 [2024-11-20 18:07:09.755199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.755214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.755552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.755566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.755879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.755894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.756177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.756192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.756527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.756542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.756864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.756878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.757219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.757234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.757522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.757537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.757816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.757830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.758007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.758022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 
00:40:10.098 [2024-11-20 18:07:09.758348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.758363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.758655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.758669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.758994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.759008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.759207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.759224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.759531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.759546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.759721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.759736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.759932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.759947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.760267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.760283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.760576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.760591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.760881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.760896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 
00:40:10.098 [2024-11-20 18:07:09.761210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.761225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.761406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.761421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.761590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.761604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.761786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.761803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.762133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.762148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.762457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.762472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.762795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.762810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.763000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.763023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.763355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.763371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.763539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.763559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 
00:40:10.098 [2024-11-20 18:07:09.763866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.763880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.764200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.764215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.764384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.764399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.764740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.098 [2024-11-20 18:07:09.764754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.098 qpair failed and we were unable to recover it. 00:40:10.098 [2024-11-20 18:07:09.765028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.765042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.765366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.765380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.765696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.765710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.765769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.765783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.766116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.766130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.766363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.766377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 
00:40:10.099 [2024-11-20 18:07:09.766555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.766576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.766761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.766776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.767103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.767118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.767453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.767468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.767801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.767815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.768102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.768117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.768288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.768303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.768476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.768490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.768789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.768804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.769132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.769146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 
00:40:10.099 [2024-11-20 18:07:09.769466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.769481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.769848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.769863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.770205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.770221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.770511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.770525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.770714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.770730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.771011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.771026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.771349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.771370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.771427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.771441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.771744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.771758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.771818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.771833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 
00:40:10.099 [2024-11-20 18:07:09.772004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.772019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.772331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.772345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.772655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.772670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.772959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.772973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.773144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.773162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.773404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.773418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.773733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.773748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.099 [2024-11-20 18:07:09.773915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.099 [2024-11-20 18:07:09.773929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.099 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.774208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.774223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.774514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.774529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 
00:40:10.100 [2024-11-20 18:07:09.774704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.774718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.775096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.775111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.775391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.775407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.775698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.775713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.776050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.776064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.776359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.776374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.776650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.776664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.776985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.776999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.777313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.777329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.777701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.777716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 
00:40:10.100 [2024-11-20 18:07:09.778039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.778054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.778394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.778409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.778731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.778746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.778926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.778941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.779291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.779306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.779540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.779555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.779828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.779843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.780139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.780154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.780479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.780494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 00:40:10.100 [2024-11-20 18:07:09.780672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.780686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it. 
00:40:10.100 [2024-11-20 18:07:09.781023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.100 [2024-11-20 18:07:09.781037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.100 qpair failed and we were unable to recover it.
00:40:10.100 [... the same connect() failure (errno = 111) and unrecoverable-qpair error for tqpair=0xd152d0 repeated over a hundred more times, through 2024-11-20 18:07:09.815191 ...]
00:40:10.103 [2024-11-20 18:07:09.815675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.103 [2024-11-20 18:07:09.815765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0b0c000b90 with addr=10.0.0.2, port=4420 00:40:10.103 qpair failed and we were unable to recover it.
00:40:10.103 [... five more identical failures for tqpair=0x7f0b0c000b90, through 2024-11-20 18:07:09.817929 ...]
00:40:10.103 [2024-11-20 18:07:09.818386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.103 [2024-11-20 18:07:09.818438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.103 qpair failed and we were unable to recover it.
00:40:10.104 [... the same tqpair=0xd152d0 failure repeated continuously through 2024-11-20 18:07:09.843720 ...]
00:40:10.106 [2024-11-20 18:07:09.844036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.844050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.844401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.844419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.844738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.844753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.845078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.845092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.845377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.845393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.845712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.845726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.845937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.845951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.846175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.846190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.846523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.846538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.846746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.846761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 
00:40:10.106 [2024-11-20 18:07:09.847105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.847119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.847282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.847297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.847618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.847632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.847909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.847923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.848225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.848240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.848449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.848464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.848736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.848750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.849071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.849086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.849399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.849414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.849739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.849754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 
00:40:10.106 [2024-11-20 18:07:09.850031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.850046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.850341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.850356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.850637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.850651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.850987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.851001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.851327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.851342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.851526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.851540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.851743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.851758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.852132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.852146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.852457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.106 [2024-11-20 18:07:09.852472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.106 qpair failed and we were unable to recover it. 00:40:10.106 [2024-11-20 18:07:09.852793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.852808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 
00:40:10.107 [2024-11-20 18:07:09.853146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.853174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.853354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.853368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.853692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.853706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.854031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.854046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.854372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.854387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.854703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.854718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.854887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.854902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.855230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.855245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.855519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.855533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.855810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.855824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 
00:40:10.107 [2024-11-20 18:07:09.856136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.856151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.856470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.856484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.856802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.856817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.857001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.857018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.857340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.857358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.857647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.857662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.857841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.857855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.858036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.858051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.858360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.858376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.858692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.858706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 
00:40:10.107 [2024-11-20 18:07:09.858886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.858900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.859232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.859247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.859526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.859541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.859821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.859836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.860189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.860205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.860505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.860519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.860703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.860717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.861044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.861058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.861359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.861374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.861689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.861703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 
00:40:10.107 [2024-11-20 18:07:09.862018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.862032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.862317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.862332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.862532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.862546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.862923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.862937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.863136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.863150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.863377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.863391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.863742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.107 [2024-11-20 18:07:09.863757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.107 qpair failed and we were unable to recover it. 00:40:10.107 [2024-11-20 18:07:09.863935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.863950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.864271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.864286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.864506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.864522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 
00:40:10.108 [2024-11-20 18:07:09.864860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.864874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.865047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.865062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.865391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.865406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.865726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.865740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.866068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.866083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.866255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.866270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.866560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.866575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.866756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.866772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.866954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.866968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.867144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.867162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 
00:40:10.108 [2024-11-20 18:07:09.867530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.867545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.867823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.867837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.868156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.868175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.868499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.868513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.868840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.868854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.869028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.869043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.869347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.869362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.869694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.869709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.870026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.870041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.870313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.870328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 
00:40:10.108 [2024-11-20 18:07:09.870601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.870616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.870812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.870825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.871156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.871175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.871458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.871472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.871783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.871797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.872112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.872127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.872451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.872466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.872768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.872783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.873125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.873140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.873353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.873370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 
00:40:10.108 [2024-11-20 18:07:09.873750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.108 [2024-11-20 18:07:09.873765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.108 qpair failed and we were unable to recover it. 00:40:10.108 [2024-11-20 18:07:09.873935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.873952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.874180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.874198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.874543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.874558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.874758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.874772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.875091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.875106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.875397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.875413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.875600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.875616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.875785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.875799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.876072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.876086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 
00:40:10.109 [2024-11-20 18:07:09.876407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.876422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.876700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.876714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.876990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.877004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.877316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.877330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.877497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.877510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.877701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.877716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.877889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.877903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.878219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.878234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.878450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.878465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.878523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.878536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 
00:40:10.109 [2024-11-20 18:07:09.878842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.878856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.879174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.879189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.879370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.879385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.879682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.879696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.879987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.880001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.880326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.880340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.880509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.880523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.880739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.880754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.881066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.881081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 00:40:10.109 [2024-11-20 18:07:09.881254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.109 [2024-11-20 18:07:09.881268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.109 qpair failed and we were unable to recover it. 
00:40:10.109 [2024-11-20 18:07:09.881639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.881653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.881938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.881952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.882274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.882290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.882476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.882491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.882815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.882830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.883155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.883176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.883459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.883473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.883795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.883812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.884151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.884170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.109 qpair failed and we were unable to recover it.
00:40:10.109 [2024-11-20 18:07:09.884469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.109 [2024-11-20 18:07:09.884483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.884803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.884818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.885138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.885152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.885446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.885460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.885795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.885810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.886141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.886156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.886235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.886252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.886310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.886325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.886651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.886665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.886984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.886999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.887291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.887307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.887621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.887635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.887981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.887996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.888276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.888291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.888619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.888633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.888917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.888931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.889265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.889280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.889562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.889577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.889880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.889895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.890078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.890094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.890405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.890421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.890608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.890623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.890795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.890809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.891129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.891143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.891428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.891443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.891635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.891649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.891989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.892003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.892277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.892292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.892613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.892627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.892908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.892922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.893215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.893231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.893409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.893424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.893589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.893603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.893901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.893915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.894229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.894244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.894580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.894594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.894883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.894898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.895177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.895193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.110 [2024-11-20 18:07:09.895485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.110 [2024-11-20 18:07:09.895500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.110 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.895811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.895831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.896142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.896156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.896512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.896526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.896805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.896820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.897141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.897156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.897439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.897453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.897725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.897739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.898056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.898071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.898364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.898380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.898715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.898730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.899050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.899064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.899403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.899418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.899720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.899734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.900054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.900068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.900383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.900398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.900575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.900590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.900784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.900799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.901105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.901119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.901477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.901492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.901816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.901831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.902175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.902191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.902249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.902265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.902575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.902590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.902770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.902786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.903106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.903120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.903396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.903411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.903733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.903747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.904062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.904080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.904362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.904377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.904692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.904706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.904983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.904998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.905171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.905185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.905528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.905543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.905835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.905849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.906170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.906186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.906525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.906539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.906712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.906727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.111 [2024-11-20 18:07:09.906973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.111 [2024-11-20 18:07:09.906987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.111 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.907270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.907285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.907459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.907474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.907861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.907876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.908153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.908173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.908498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.908512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.908691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.908706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.908883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.908898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.909084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.909100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.909301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.909316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.909610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.909625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.909800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.909815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.910023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.910037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.910320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.910337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.910607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.910621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.910789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.910802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.911097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.911112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.911430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.911445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.911633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.911649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.911994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.912008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.912286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.912301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.912581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.912595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.912917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.912932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.913230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.913245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.913538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.913552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.913835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.913849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.914040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.914055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.914432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.914447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.914614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.914629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.914796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.914811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.914990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.915005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.915315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.915333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.915651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.915665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.915935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.915951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.916238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.916253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.916577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.916592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.916929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.916943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.917225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.917239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.917528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.112 [2024-11-20 18:07:09.917543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.112 qpair failed and we were unable to recover it.
00:40:10.112 [2024-11-20 18:07:09.917826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.917841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.918204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.918220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.918557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.918572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.918857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.918872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.919156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.919176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.919475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.919490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.919685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.919701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.920039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.920053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.920386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.920401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.920637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.920652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.920821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.920835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.921135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.921149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.921464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.921478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.921674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.921688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.922082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.922096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.922376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.922391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.922665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.922679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.922995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.923009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.923314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.923329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.923504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.923521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.923847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.923862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.924195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.924210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.924494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.924508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.924787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.924803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.925099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.925113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.925247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.925262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.925559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.925573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.925888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.925902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.926218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.926235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.926556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.926571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.926847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.926861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.927139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.927153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.927326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.927341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.927571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.113 [2024-11-20 18:07:09.927588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.113 qpair failed and we were unable to recover it.
00:40:10.113 [2024-11-20 18:07:09.927907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.927922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.928198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.928213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.928541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.928555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.928820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.928835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.929170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.929186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.929503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.929517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.929837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.929852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.930176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.930191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.930519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.930533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.930734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.930749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.931082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.931096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.931384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.931398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.931724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.931739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.932103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.932118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.932295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.932311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.932493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.932507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.932809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.932825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.933151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.933172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.933498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.933514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.933710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.933724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.934033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.934048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.934407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.934422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.934737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.934752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.935082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.935096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.935426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.935441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.935746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.935760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.936085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.936102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.936423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.936438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.936784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.936799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.936996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.937010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.937205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.937220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.937420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.937436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.937607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.937622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.937896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.937910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.938232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.938247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.938378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.938392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.938571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.938585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.938775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.114 [2024-11-20 18:07:09.938789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.114 qpair failed and we were unable to recover it.
00:40:10.114 [2024-11-20 18:07:09.939113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.939127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.939412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.939427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.939748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.939763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.940041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.940055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.940248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.940271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.940599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.940614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.940893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.940907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.941219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.941234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.941538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.941552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.941870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.941884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.942192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.942207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.942534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.942549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.942825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.942840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.943157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.115 [2024-11-20 18:07:09.943177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.115 qpair failed and we were unable to recover it.
00:40:10.115 [2024-11-20 18:07:09.943470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.943485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.943808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.943826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.943991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.944005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.944317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.944332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.944662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.944677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.945021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.945035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.945315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.945331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.945634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.945648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.945922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.945936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.946253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.946268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 
00:40:10.115 [2024-11-20 18:07:09.946449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.946463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.946686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.946700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.946987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.947001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.947324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.947339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.947619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.947634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.947821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.947847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.948053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.948068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.948425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.948441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.948718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.948732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.949059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.949073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 
00:40:10.115 [2024-11-20 18:07:09.949379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.949393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.949730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.115 [2024-11-20 18:07:09.949744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.115 qpair failed and we were unable to recover it. 00:40:10.115 [2024-11-20 18:07:09.950067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.950081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.950281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.950299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.950480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.950495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.950808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.950823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.950997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.951012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.951317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.951332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.951657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.951672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.951991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.952007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 
00:40:10.116 [2024-11-20 18:07:09.952321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.952336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.952653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.952668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.952948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.952962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.953290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.953305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.953586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.953600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.953920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.953935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.954272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.954287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.954572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.954586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.954773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.954788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.955055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.955070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 
00:40:10.116 [2024-11-20 18:07:09.955361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.955376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.955655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.955669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.955855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.955877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.956238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.956253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.956439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.956453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.956822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.956837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.957115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.957130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.957521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.957537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.957851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.957865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.958150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.958177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 
00:40:10.116 [2024-11-20 18:07:09.958508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.958522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.958747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.958764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.959088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.959102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.959330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.959345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.959629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.959644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.959960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.959974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.960295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.960310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.960627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.960641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.960916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.960930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 00:40:10.116 [2024-11-20 18:07:09.961267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.116 [2024-11-20 18:07:09.961281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.116 qpair failed and we were unable to recover it. 
00:40:10.116 [2024-11-20 18:07:09.961576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.961590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.961902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.961916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.962229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.962244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.962569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.962584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.962788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.962802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.963112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.963126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.963417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.963432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.963626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.963641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.963972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.963986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.964161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.964177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 
00:40:10.117 [2024-11-20 18:07:09.964364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.964380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.964573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.964589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.964907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.964922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.965116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.965130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.965432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.965447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.965772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.965787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.966059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.966073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.966365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.966380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.966708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.966722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.967085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.967100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 
00:40:10.117 [2024-11-20 18:07:09.967460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.967475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.967792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.967806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.968125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.968139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.968448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.968463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.968788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.968802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.969125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.969139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.969464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.969479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.969802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.969816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.970144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.970166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.970469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.970484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 
00:40:10.117 [2024-11-20 18:07:09.970706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.970720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.970895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.970909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.971242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.117 [2024-11-20 18:07:09.971257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.117 qpair failed and we were unable to recover it. 00:40:10.117 [2024-11-20 18:07:09.971534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.118 [2024-11-20 18:07:09.971547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.118 qpair failed and we were unable to recover it. 00:40:10.118 [2024-11-20 18:07:09.971867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.118 [2024-11-20 18:07:09.971881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.118 qpair failed and we were unable to recover it. 00:40:10.118 [2024-11-20 18:07:09.972200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.118 [2024-11-20 18:07:09.972215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.118 qpair failed and we were unable to recover it. 00:40:10.118 [2024-11-20 18:07:09.972502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.118 [2024-11-20 18:07:09.972516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.118 qpair failed and we were unable to recover it. 00:40:10.118 [2024-11-20 18:07:09.972832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.118 [2024-11-20 18:07:09.972847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.118 qpair failed and we were unable to recover it. 00:40:10.118 [2024-11-20 18:07:09.973038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.118 [2024-11-20 18:07:09.973053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.118 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.973234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.973251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 
00:40:10.447 [2024-11-20 18:07:09.973546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.973561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.973878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.973892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.974206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.974221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.974517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.974531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.974847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.974861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.975145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.975164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.975514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.975529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.975889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.975903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.976228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.976244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.976564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.976578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 
00:40:10.447 [2024-11-20 18:07:09.976898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.976917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.977118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.977133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.977491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.977506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.977788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.977803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.977988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.978013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.978321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.978336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.978665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.978680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.978992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.979006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.979326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.979341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.979511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.979526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 
00:40:10.447 [2024-11-20 18:07:09.979900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.979915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.980191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.447 [2024-11-20 18:07:09.980206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.447 qpair failed and we were unable to recover it. 00:40:10.447 [2024-11-20 18:07:09.980397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.448 [2024-11-20 18:07:09.980413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.448 qpair failed and we were unable to recover it. 00:40:10.448 [2024-11-20 18:07:09.980606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.448 [2024-11-20 18:07:09.980621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.448 qpair failed and we were unable to recover it. 00:40:10.448 [2024-11-20 18:07:09.980937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.448 [2024-11-20 18:07:09.980952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.448 qpair failed and we were unable to recover it. 00:40:10.448 [2024-11-20 18:07:09.981277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.448 [2024-11-20 18:07:09.981292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.448 qpair failed and we were unable to recover it. 00:40:10.448 [2024-11-20 18:07:09.981604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.448 [2024-11-20 18:07:09.981619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.448 qpair failed and we were unable to recover it. 00:40:10.448 [2024-11-20 18:07:09.982027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.448 [2024-11-20 18:07:09.982045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.448 qpair failed and we were unable to recover it. 00:40:10.448 [2024-11-20 18:07:09.982373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.448 [2024-11-20 18:07:09.982389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.448 qpair failed and we were unable to recover it. 00:40:10.448 [2024-11-20 18:07:09.982672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.448 [2024-11-20 18:07:09.982686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.448 qpair failed and we were unable to recover it. 
00:40:10.448 [2024-11-20 18:07:09.983018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.983035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.983250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.983266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.983553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.983568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.983772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.983790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.984146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.984171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.984501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.984516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.984801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.984818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.985145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.985166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.985530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.985545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.985722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.985737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.985969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.985984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.986270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.986286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.986480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.986496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.986698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.986713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.986941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.986956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.987169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.987185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.987523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.987538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.987729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.987743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.988043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.988057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.988373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.988388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.988569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.988585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.988908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.988927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.989143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.989162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.989466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.989480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.989805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.989820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.990172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.990188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.990495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.990509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.990704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.990719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.991001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.991015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.991343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.448 [2024-11-20 18:07:09.991358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.448 qpair failed and we were unable to recover it.
00:40:10.448 [2024-11-20 18:07:09.991639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.991654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.991977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.991992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.992309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.992324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.992646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.992660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.992864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.992880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.993218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.993233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.993545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.993559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.993883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.993897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.994176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.994191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.994487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.994501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.994808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.994822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.995133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.995148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.995324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.995339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.995662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.995677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.995867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.995882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.996170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.996186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.996384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.996399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.996697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.996711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.997029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.997047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.997370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.997385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.997650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.997664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.998070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.998085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.998289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.998306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.998592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.998606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.998884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.998898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.999120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.999135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.999383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.999397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:09.999713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:09.999728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.000005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.000021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.000319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.000334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.000624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.000638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.000990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.001005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.001369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.001385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.001568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.001585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.001821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.001837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.002708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.002728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.003096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.003113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.449 [2024-11-20 18:07:10.003467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.449 [2024-11-20 18:07:10.003484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.449 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.003822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.003838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.004033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.004049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.004256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.004273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.004486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.004502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.004703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.004719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.005015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.005032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.005375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.005390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.005491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.005506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.005800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.005815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.006230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.006245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.006529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.006543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.006876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.006890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.007180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.007195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.007479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.007493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.007812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.007826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.008135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.008149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.008370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.008384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.008701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.008716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.008944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.008959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.009265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.009290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.009617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.009632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.009973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.009992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.010313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.010329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.010648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.010662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.011001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.011015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.011435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.011450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.011676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.011691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.012045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.012059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.012386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.012401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.012576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.012591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.012907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.012922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.013118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.013133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.013413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.013428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.013746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.013761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.014079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.014094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.014284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.014300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.450 [2024-11-20 18:07:10.014615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.450 [2024-11-20 18:07:10.014630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.450 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.014801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.014816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.015016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.015031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.015362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.015377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.015556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.015571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.015896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.015911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.016194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.016209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.016414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.016428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.016705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.016719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.016897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.016913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.017248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.017263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.017544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.017559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.017731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.017752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.017918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.017933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.018141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.018155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.018492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.018506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.018826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.018841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.019164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.019180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.019445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.019459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.019661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.019676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.019995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.020011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.020379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.020394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.020741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.020755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.021054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.021069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.021386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.021401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.021583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.021597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.021834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.021851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.022147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.022167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.022508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.022524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.022813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.022827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.023111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.023126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.023304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.023320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.023704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.023719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.024072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.024086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.024420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.024435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.024622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.451 [2024-11-20 18:07:10.024637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.451 qpair failed and we were unable to recover it.
00:40:10.451 [2024-11-20 18:07:10.024838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.024853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.025155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.025176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.025395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.025409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.025581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.025596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.025775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.025790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.026084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.026099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.026414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.026429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.026782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.026797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.027102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.027116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.027453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.027468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.027663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.027679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.027968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.027984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.028314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.028329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.028629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.028643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.028968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.028984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.029320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.029334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.029638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.029653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.030013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.030031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.030364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.030379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.030561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.030576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.030749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.030764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.030878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.030894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.031075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.031090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.031402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.031418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.031513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.031529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.031829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.031844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.032144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.032169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.032474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.032489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.032798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.032813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.033079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.033095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.033409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.033425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.033758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.033773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.034103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.034118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.034448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.452 [2024-11-20 18:07:10.034463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.452 qpair failed and we were unable to recover it.
00:40:10.452 [2024-11-20 18:07:10.034795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.034809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.035137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.035152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.035480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.035495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.035840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.035855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.036030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.036046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.036385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.036400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.036733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.036748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.037085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.037100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.037301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.037317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.037533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.037548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.037938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.037955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.038285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.038300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.038583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.038597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.038832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.038846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.039134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.039150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.039456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.039471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.039753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.039767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.040070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.040085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.040403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.040418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.040708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.040723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.041054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.041069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.041367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.041383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.041710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.041726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.041931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.041946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.042148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.042168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.042381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.042396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.042606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.042622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.042961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.042976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.043308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.043324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.043616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.043630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.043832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.043847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.044138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.044153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.044495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.044510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.044836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.044851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.045183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.045198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.045502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.453 [2024-11-20 18:07:10.045517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.453 qpair failed and we were unable to recover it.
00:40:10.453 [2024-11-20 18:07:10.045748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.453 [2024-11-20 18:07:10.045763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.453 qpair failed and we were unable to recover it. 00:40:10.453 [2024-11-20 18:07:10.045975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.453 [2024-11-20 18:07:10.045992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.453 qpair failed and we were unable to recover it. 00:40:10.453 [2024-11-20 18:07:10.046280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.046295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.046624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.046639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.046828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.046843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.047142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.047156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.047342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.047364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.047635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.047650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.047931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.047945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.048180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.048195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 
00:40:10.454 [2024-11-20 18:07:10.048538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.048554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.048830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.048848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.049030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.049055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.049408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.049423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.049737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.049752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.050102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.050121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.050349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.050365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.050687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.050702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.051029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.051045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.051255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.051271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 
00:40:10.454 [2024-11-20 18:07:10.051518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.051532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.051830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.051845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.052196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.052211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.052390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.052405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.052658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.052673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.052998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.053013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.053364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.053380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.053700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.053716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.054058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.054073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.054410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.054425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 
00:40:10.454 [2024-11-20 18:07:10.054605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.054620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.054963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.054978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.055165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.055180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.055497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.055513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.055714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.055728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.056006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.056021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.056237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.056252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.056439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.056453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.056798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.056813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.454 [2024-11-20 18:07:10.057001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.057016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 
00:40:10.454 [2024-11-20 18:07:10.057334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.454 [2024-11-20 18:07:10.057349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.454 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.057584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.057598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.057926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.057940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.058282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.058297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.058598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.058613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.058799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.058814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.059134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.059149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.059455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.059470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.059826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.059841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.060048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.060061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 
00:40:10.455 [2024-11-20 18:07:10.060392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.060407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.060615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.060629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.060966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.060980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.061273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.061288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.061612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.061626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.061818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.061833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.062188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.062203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.062571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.062586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.062790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.062805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.062990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.063004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 
00:40:10.455 [2024-11-20 18:07:10.063311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.063327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.063651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.063666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.063844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.063867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.064079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.064093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.064284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.064300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.064505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.064521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.064715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.064738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.065026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.065041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.065252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.065267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.065360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.065375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 
00:40:10.455 [2024-11-20 18:07:10.065671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.065686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.065876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.065892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.066218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.066233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.066417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.066432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.066755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.066770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.067113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.067128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.067434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.067449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.067634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.067657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.455 [2024-11-20 18:07:10.067938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-11-20 18:07:10.067953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.455 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.068138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.068153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 
00:40:10.456 [2024-11-20 18:07:10.068215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.068229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.068526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.068541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.068816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.068831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.069174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.069193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.069490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.069505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.069688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.069702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.070031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.070045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.070445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.070460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.070746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.070760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.071008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.071022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 
00:40:10.456 [2024-11-20 18:07:10.071243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.071258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.071370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.071384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.071667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.071681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.071971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.071986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.072189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.072211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.072447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.072461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.072833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.072848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.073132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.073147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.073522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.073537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.073725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.073745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 
00:40:10.456 [2024-11-20 18:07:10.073926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.073941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.074252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.074267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.074458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.074473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.074653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.074668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.075014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.075029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.075217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.075232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.075589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.075604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.075824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.075839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.076015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.076030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.076368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.076383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 
00:40:10.456 [2024-11-20 18:07:10.076758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.076773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.456 [2024-11-20 18:07:10.077129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-11-20 18:07:10.077143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.456 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.077461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.077477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.077769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.077784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.078117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.078132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.078511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.078526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.078854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.078869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.078928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.078942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.079043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.079057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.079451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.079467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 
00:40:10.457 [2024-11-20 18:07:10.079804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.079818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.080019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.080034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.080354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.080369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.080682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.080696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.080903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.080919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.081210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.081225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.081549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.081563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.081778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.081793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.082123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.082138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 00:40:10.457 [2024-11-20 18:07:10.082342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-11-20 18:07:10.082357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.457 qpair failed and we were unable to recover it. 
00:40:10.457 [2024-11-20 18:07:10.082663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.457 [2024-11-20 18:07:10.082677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.457 qpair failed and we were unable to recover it.
[... the same three-record failure repeats roughly 210 times between 18:07:10.082663 and 18:07:10.145468: every connect() attempt against tqpair=0xd152d0 at 10.0.0.2, port 4420 fails with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it." Only the first and last occurrences are kept here. ...]
00:40:10.463 [2024-11-20 18:07:10.145452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:10.463 [2024-11-20 18:07:10.145468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420
00:40:10.463 qpair failed and we were unable to recover it.
00:40:10.463 [2024-11-20 18:07:10.145656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.145671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.146016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.146031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.146233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.146254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.146542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.146556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.146874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.146888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.147084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.147100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.147279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.147295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.147591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.147606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.147787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.147802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.148089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.148103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 
00:40:10.463 [2024-11-20 18:07:10.148413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.148428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.148754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.148768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.149098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.149112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.149433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.149448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.149651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.149665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.149840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.149854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.150177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.150192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.150501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.150515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.150703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.150718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.151060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.151075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 
00:40:10.463 [2024-11-20 18:07:10.151317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.151332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.151623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.151638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.151968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.151982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.152282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.152297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.152514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.152528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.152733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.152747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.152947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.152962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.153250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.153265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.153587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.153601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.153924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.153938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 
00:40:10.463 [2024-11-20 18:07:10.154227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.154242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.154339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.463 [2024-11-20 18:07:10.154353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.463 qpair failed and we were unable to recover it. 00:40:10.463 [2024-11-20 18:07:10.154655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.154669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.154989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.155003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.155181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.155196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.155291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.155305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.155634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.155648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.155823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.155838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.156018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.156033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.156296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.156311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 
00:40:10.464 [2024-11-20 18:07:10.156605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.156619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.156985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.156999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.157423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.157438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.157722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.157736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.158073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.158088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.158437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.158453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.158797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.158811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.159153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.159182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.159310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.159325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.159621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.159635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 
00:40:10.464 [2024-11-20 18:07:10.159860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.159874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.160206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.160222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.160432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.160447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.160775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.160790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.161072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.161086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.161407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.161422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.161742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.161759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.162095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.162109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.162178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.162192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.162402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.162416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 
00:40:10.464 [2024-11-20 18:07:10.162708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.162722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.163093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.163107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.163442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.163457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.163648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.163664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.163993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.164008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.164315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.164330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.164504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.164519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.164856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.164870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.165182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.165197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 00:40:10.464 [2024-11-20 18:07:10.165499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.165514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.464 qpair failed and we were unable to recover it. 
00:40:10.464 [2024-11-20 18:07:10.165796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.464 [2024-11-20 18:07:10.165811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.166124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.166138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.166440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.166455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.166644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.166665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.166947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.166961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.167317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.167332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.167638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.167653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.167972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.167987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.168178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.168194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 00:40:10.465 [2024-11-20 18:07:10.168368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.465 [2024-11-20 18:07:10.168383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.465 qpair failed and we were unable to recover it. 
00:40:10.465 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:40:10.465 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:40:10.465 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:40:10.465 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:40:10.465 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:10.467 [2024-11-20 18:07:10.194487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.194501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.194782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.194797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.194984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.195005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.195339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.195354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.195639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.195653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.195935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.195950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.196273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.196288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.196609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.196623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.196749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.467 [2024-11-20 18:07:10.196763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.467 qpair failed and we were unable to recover it. 00:40:10.467 [2024-11-20 18:07:10.196953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.196967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 
00:40:10.468 [2024-11-20 18:07:10.197141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.197157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.197521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.197537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.197727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.197743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.198055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.198069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.198353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.198370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.198697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.198712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.198893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.198909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.199116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.199131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.199459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.199474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.199805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.199823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 
00:40:10.468 [2024-11-20 18:07:10.200166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.200181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.200387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.200404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.200734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.200752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.201074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.201088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.201460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.201475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.201666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.201681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.202000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.202015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.202382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.202397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.202676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.202690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.203014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.203029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 
00:40:10.468 [2024-11-20 18:07:10.203371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.203386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.203722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.203737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.203960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.203975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.204295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.204310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.204593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.204609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.204928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.204944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.205135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.205150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.205534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.205548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.205833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.205848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.206179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.206196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 
00:40:10.468 [2024-11-20 18:07:10.206498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.206513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.206831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.206846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.207148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.207167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.207437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.207452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.207633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.207647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.207974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.207988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.208310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.468 [2024-11-20 18:07:10.208325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.468 qpair failed and we were unable to recover it. 00:40:10.468 [2024-11-20 18:07:10.208613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.469 [2024-11-20 18:07:10.208627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.469 qpair failed and we were unable to recover it. 00:40:10.469 [2024-11-20 18:07:10.208814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.469 [2024-11-20 18:07:10.208837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.469 qpair failed and we were unable to recover it. 00:40:10.469 [2024-11-20 18:07:10.209018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.469 [2024-11-20 18:07:10.209037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd152d0 with addr=10.0.0.2, port=4420 00:40:10.469 qpair failed and we were unable to recover it. 
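errno = 111 on Linux is ECONNREFUSED, so every retry above means nothing is accepting TCP connections at 10.0.0.2:4420, which is exactly the state a target-disconnect test provokes. A minimal shell probe to confirm that reading, assuming bash with /dev/tcp support; the address and port are copied from the messages above:

# Hypothetical probe, not part of the harness.
timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
  && echo "listener is up on 10.0.0.2:4420" \
  || echo "connect failed; with no listener this surfaces as ECONNREFUSED (errno 111)"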
00:40:10.469 [connect() failed (errno = 111) / qpair failed retries continue, 18:07:10.209353-18:07:10.209711]
00:40:10.469 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:40:10.469 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:40:10.469 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:10.469 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:10.469 [retries continue, 18:07:10.209929-18:07:10.211459]
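The two traced commands above start the tc2 setup: arm a cleanup trap, then create the 64 MiB malloc bdev the target will export. A rough standalone sketch of those steps, assuming a running nvmf_tgt and SPDK's stock scripts/rpc.py; process_shm and nvmftestfini are harness helpers, stubbed here as an echo:

# Sketch only, not the harness verbatim.
# On interrupt or exit, dump shared-memory state and tear the target down
# (stubbed; the real harness calls its process_shm and nvmftestfini helpers).
trap 'echo "cleanup: dump shm $NVMF_APP_SHM_ID, stop nvmf_tgt"' SIGINT SIGTERM EXIT

# Create a 64 MiB bdev with 512-byte blocks, named Malloc0.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0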
00:40:10.469 [the connect() failed (errno = 111) / qpair failed retry pattern on tqpair=0xd152d0 (10.0.0.2:4420) continues unbroken, 18:07:10.211784-18:07:10.226799]
00:40:10.470 [retries continue, 18:07:10.227117-18:07:10.228689]
00:40:10.470 Malloc0
00:40:10.470 [retries continue, 18:07:10.229080-18:07:10.229402]
00:40:10.470 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:10.471 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:40:10.471 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:10.471 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:10.471 [connect() failed (errno = 111) / qpair failed retries continue throughout, 18:07:10.229706-18:07:10.231846]
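rpc_cmd is the harness wrapper around scripts/rpc.py, so the traced call registers the TCP transport with the running target; the -o appears to be the test scripts' switch for the TCP C2H-success optimization (an assumption; check scripts/rpc.py --help on the tree under test). A sketch of that step plus the subsystem and listener wiring such a test typically performs next; the commented commands are assumptions, not taken from this slice of the log:

# Create the TCP transport inside the running target.
scripts/rpc.py nvmf_create_transport -t tcp -o

# Typical follow-up wiring (assumed, not traced in this excerpt):
# scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
# scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420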
00:40:10.471 [retries continue, 18:07:10.232127-18:07:10.236093]
00:40:10.471 [2024-11-20 18:07:10.236179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:40:10.471 [retries continue, 18:07:10.236393-18:07:10.238213]
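The *** TCP Transport Init *** notice is the target-side acknowledgement of the nvmf_create_transport call above; the host-side retries keep failing regardless, since a transport with no listener still refuses connections. When triaging a capture like this, it helps to split the two voices apart; a throwaway filter, assuming the capture is saved as build.log (a hypothetical file name):

# Count host-side refusals and locate target-side notices.
grep -c 'connect() failed, errno = 111' build.log
grep -n 'nvmf_tcp_create: \*NOTICE\*' build.log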
00:40:10.472 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:10.472 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:40:10.472 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:10.472 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
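rpc_cmd is the test harness wrapper around SPDK's JSON-RPC client, so the subsystem creation above can be reproduced directly with scripts/rpc.py (a sketch, assuming the default RPC socket):

    # Create the subsystem, allow any host NQN (-a), set its serial number (-s).
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001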
00:40:10.473 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:10.473 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:40:10.473 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:10.473 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
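The namespace step attaches an existing bdev to the subsystem; a sketch, assuming the Malloc0 bdev was created earlier in the test run (e.g. via the bdev_malloc_create RPC):

    # Expose the Malloc0 bdev as a namespace of cnode1.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0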
00:40:10.475 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:10.475 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:40:10.475 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:10.475 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
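Adding the listener is the step that finally opens 10.0.0.2:4420 and ends the ECONNREFUSED loop above; the equivalent direct call:

    # Start accepting NVMe/TCP connections for cnode1 on 10.0.0.2:4420.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420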
00:40:10.476 [2024-11-20 18:07:10.276513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:40:10.476 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:10.476 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:40:10.476 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:10.476 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:10.476 [2024-11-20 18:07:10.287245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.476 [2024-11-20 18:07:10.287338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.476 [2024-11-20 18:07:10.287366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.476 [2024-11-20 18:07:10.287378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.476 [2024-11-20 18:07:10.287388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:10.476 [2024-11-20 18:07:10.287415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.476 qpair failed and we were unable to recover it.
00:40:10.476 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:10.476 18:07:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2943575
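Once the target is listening, the failure mode changes: the TCP connect now succeeds, but the NVMe-oF Fabric CONNECT for the I/O qpair is rejected (sct 1, sc 130) with "Unknown controller ID 0x1", consistent with the forced-disconnect scenario this test case (nvmf_target_disconnect_tc2) exercises. For reference, a host-side connect against this target would look like the following sketch, assuming nvme-cli on the initiator:

    # Hypothetical initiator-side connect to the subsystem under test.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1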
00:40:10.740 (the CONNECT failure block above repeats, with new timestamps, for every subsequent reconnect attempt through 18:07:10.447, each ending "qpair failed and we were unable to recover it.")
00:40:10.740 [2024-11-20 18:07:10.457510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.740 [2024-11-20 18:07:10.457594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.740 [2024-11-20 18:07:10.457610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.740 [2024-11-20 18:07:10.457617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.740 [2024-11-20 18:07:10.457624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.740 [2024-11-20 18:07:10.457640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.740 qpair failed and we were unable to recover it. 00:40:10.740 [2024-11-20 18:07:10.467550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.740 [2024-11-20 18:07:10.467612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.740 [2024-11-20 18:07:10.467628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.740 [2024-11-20 18:07:10.467635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.740 [2024-11-20 18:07:10.467642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.740 [2024-11-20 18:07:10.467658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.740 qpair failed and we were unable to recover it. 00:40:10.740 [2024-11-20 18:07:10.477603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.740 [2024-11-20 18:07:10.477676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.740 [2024-11-20 18:07:10.477699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.740 [2024-11-20 18:07:10.477707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.740 [2024-11-20 18:07:10.477713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.740 [2024-11-20 18:07:10.477729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.740 qpair failed and we were unable to recover it. 
00:40:10.740 [2024-11-20 18:07:10.487647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.740 [2024-11-20 18:07:10.487724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.740 [2024-11-20 18:07:10.487742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.740 [2024-11-20 18:07:10.487750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.740 [2024-11-20 18:07:10.487756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.740 [2024-11-20 18:07:10.487773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.740 qpair failed and we were unable to recover it. 00:40:10.740 [2024-11-20 18:07:10.497562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.740 [2024-11-20 18:07:10.497626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.740 [2024-11-20 18:07:10.497646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.740 [2024-11-20 18:07:10.497654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.740 [2024-11-20 18:07:10.497661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.740 [2024-11-20 18:07:10.497678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.740 qpair failed and we were unable to recover it. 00:40:10.740 [2024-11-20 18:07:10.507624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.740 [2024-11-20 18:07:10.507691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.740 [2024-11-20 18:07:10.507713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.740 [2024-11-20 18:07:10.507721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.740 [2024-11-20 18:07:10.507727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.740 [2024-11-20 18:07:10.507746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.740 qpair failed and we were unable to recover it. 
00:40:10.740 [2024-11-20 18:07:10.517837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.740 [2024-11-20 18:07:10.517917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.740 [2024-11-20 18:07:10.517935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.740 [2024-11-20 18:07:10.517942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.740 [2024-11-20 18:07:10.517949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.740 [2024-11-20 18:07:10.517965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.740 qpair failed and we were unable to recover it. 00:40:10.740 [2024-11-20 18:07:10.527850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.740 [2024-11-20 18:07:10.527922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.740 [2024-11-20 18:07:10.527939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.740 [2024-11-20 18:07:10.527947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.527953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.527970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:10.741 [2024-11-20 18:07:10.537707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.537818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.537835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.537843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.537850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.537866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 
00:40:10.741 [2024-11-20 18:07:10.547845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.547968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.547986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.547993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.548000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.548017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:10.741 [2024-11-20 18:07:10.557824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.557900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.557918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.557925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.557935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.557952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:10.741 [2024-11-20 18:07:10.567893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.567969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.567992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.567999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.568005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.568021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 
00:40:10.741 [2024-11-20 18:07:10.577870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.577937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.577954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.577961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.577967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.577984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:10.741 [2024-11-20 18:07:10.587893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.587957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.587974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.587981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.587987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.588003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:10.741 [2024-11-20 18:07:10.597933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.598039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.598055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.598063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.598070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.598086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 
00:40:10.741 [2024-11-20 18:07:10.608008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.608083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.608099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.608107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.608113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.608130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:10.741 [2024-11-20 18:07:10.618012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.618084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.618101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.618108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.618115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.618131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:10.741 [2024-11-20 18:07:10.628025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.628096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.628113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.628120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.628127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.628143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 
00:40:10.741 [2024-11-20 18:07:10.638060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.638130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.638147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.638154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.638168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.638184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:10.741 [2024-11-20 18:07:10.648132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.741 [2024-11-20 18:07:10.648217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.741 [2024-11-20 18:07:10.648234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.741 [2024-11-20 18:07:10.648242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.741 [2024-11-20 18:07:10.648248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:10.741 [2024-11-20 18:07:10.648265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.741 qpair failed and we were unable to recover it. 00:40:11.004 [2024-11-20 18:07:10.658005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.658070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.658092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.658099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.658105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.658121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 
00:40:11.004 [2024-11-20 18:07:10.668165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.668233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.668251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.668258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.668264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.668281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 00:40:11.004 [2024-11-20 18:07:10.678228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.678346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.678364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.678372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.678378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.678394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 00:40:11.004 [2024-11-20 18:07:10.688131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.688207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.688230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.688238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.688245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.688263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 
00:40:11.004 [2024-11-20 18:07:10.698243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.698300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.698318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.698325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.698331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.698362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 00:40:11.004 [2024-11-20 18:07:10.708283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.708347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.708364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.708372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.708378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.708396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 00:40:11.004 [2024-11-20 18:07:10.718298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.718366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.718383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.718390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.718396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.718413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 
00:40:11.004 [2024-11-20 18:07:10.728335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.728405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.728422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.728429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.728437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.728453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 00:40:11.004 [2024-11-20 18:07:10.738314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.738380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.004 [2024-11-20 18:07:10.738395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.004 [2024-11-20 18:07:10.738403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.004 [2024-11-20 18:07:10.738409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.004 [2024-11-20 18:07:10.738425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.004 qpair failed and we were unable to recover it. 00:40:11.004 [2024-11-20 18:07:10.748410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.004 [2024-11-20 18:07:10.748512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.748535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.748542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.748548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.748565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 
00:40:11.005 [2024-11-20 18:07:10.758432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.758512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.758529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.758537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.758543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.758559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 00:40:11.005 [2024-11-20 18:07:10.768482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.768556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.768573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.768581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.768587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.768604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 00:40:11.005 [2024-11-20 18:07:10.778500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.778577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.778594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.778601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.778608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.778624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 
00:40:11.005 [2024-11-20 18:07:10.788522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.788590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.788606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.788614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.788621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.788642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 00:40:11.005 [2024-11-20 18:07:10.798568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.798639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.798656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.798663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.798670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.798686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 00:40:11.005 [2024-11-20 18:07:10.808634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.808755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.808773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.808780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.808786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.808803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 
00:40:11.005 [2024-11-20 18:07:10.818577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.818645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.818664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.818672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.818678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.818694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 00:40:11.005 [2024-11-20 18:07:10.828628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.828696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.828712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.828720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.828726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.828743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 00:40:11.005 [2024-11-20 18:07:10.838669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.838741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.838762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.838769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.838776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.838792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 
00:40:11.005 [2024-11-20 18:07:10.848684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.848773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.848789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.848796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.848803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.848820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 00:40:11.005 [2024-11-20 18:07:10.858724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.858793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.858810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.858817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.858823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.858839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 00:40:11.005 [2024-11-20 18:07:10.868767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.868833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.868850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.868857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.005 [2024-11-20 18:07:10.868863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.005 [2024-11-20 18:07:10.868881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.005 qpair failed and we were unable to recover it. 
00:40:11.005 [2024-11-20 18:07:10.878784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.005 [2024-11-20 18:07:10.878854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.005 [2024-11-20 18:07:10.878882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.005 [2024-11-20 18:07:10.878890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.006 [2024-11-20 18:07:10.878896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.006 [2024-11-20 18:07:10.878922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.006 qpair failed and we were unable to recover it. 00:40:11.006 [2024-11-20 18:07:10.888826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.006 [2024-11-20 18:07:10.888914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.006 [2024-11-20 18:07:10.888953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.006 [2024-11-20 18:07:10.888962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.006 [2024-11-20 18:07:10.888969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.006 [2024-11-20 18:07:10.888994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.006 qpair failed and we were unable to recover it. 00:40:11.006 [2024-11-20 18:07:10.898857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.006 [2024-11-20 18:07:10.898926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.006 [2024-11-20 18:07:10.898965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.006 [2024-11-20 18:07:10.898974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.006 [2024-11-20 18:07:10.898981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.006 [2024-11-20 18:07:10.899005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.006 qpair failed and we were unable to recover it. 
00:40:11.006 [2024-11-20 18:07:10.908854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.006 [2024-11-20 18:07:10.908951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.006 [2024-11-20 18:07:10.908972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.006 [2024-11-20 18:07:10.908979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.006 [2024-11-20 18:07:10.908986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.006 [2024-11-20 18:07:10.909004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.006 qpair failed and we were unable to recover it. 00:40:11.269 [2024-11-20 18:07:10.918797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.269 [2024-11-20 18:07:10.918868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.269 [2024-11-20 18:07:10.918886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.269 [2024-11-20 18:07:10.918893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.269 [2024-11-20 18:07:10.918900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.269 [2024-11-20 18:07:10.918917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.269 qpair failed and we were unable to recover it. 00:40:11.269 [2024-11-20 18:07:10.928939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.269 [2024-11-20 18:07:10.929016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.269 [2024-11-20 18:07:10.929040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.269 [2024-11-20 18:07:10.929047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.269 [2024-11-20 18:07:10.929054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.269 [2024-11-20 18:07:10.929071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.269 qpair failed and we were unable to recover it. 
00:40:11.269 [2024-11-20 18:07:10.938954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.269 [2024-11-20 18:07:10.939015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.269 [2024-11-20 18:07:10.939033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.269 [2024-11-20 18:07:10.939040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.269 [2024-11-20 18:07:10.939046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.269 [2024-11-20 18:07:10.939063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.269 qpair failed and we were unable to recover it. 00:40:11.269 [2024-11-20 18:07:10.949021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.269 [2024-11-20 18:07:10.949086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.269 [2024-11-20 18:07:10.949103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.269 [2024-11-20 18:07:10.949110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.269 [2024-11-20 18:07:10.949116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.269 [2024-11-20 18:07:10.949133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.269 qpair failed and we were unable to recover it. 00:40:11.269 [2024-11-20 18:07:10.959055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.269 [2024-11-20 18:07:10.959127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.269 [2024-11-20 18:07:10.959144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.269 [2024-11-20 18:07:10.959151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.269 [2024-11-20 18:07:10.959163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.269 [2024-11-20 18:07:10.959181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.269 qpair failed and we were unable to recover it. 
00:40:11.269 [2024-11-20 18:07:10.969083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.269 [2024-11-20 18:07:10.969175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.269 [2024-11-20 18:07:10.969194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.269 [2024-11-20 18:07:10.969201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.269 [2024-11-20 18:07:10.969207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.269 [2024-11-20 18:07:10.969231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.269 qpair failed and we were unable to recover it.
00:40:11.269 [2024-11-20 18:07:10.979066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.269 [2024-11-20 18:07:10.979135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.269 [2024-11-20 18:07:10.979152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.269 [2024-11-20 18:07:10.979165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.269 [2024-11-20 18:07:10.979173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.269 [2024-11-20 18:07:10.979189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.269 qpair failed and we were unable to recover it.
00:40:11.269 [2024-11-20 18:07:10.989107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.269 [2024-11-20 18:07:10.989180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.269 [2024-11-20 18:07:10.989198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.269 [2024-11-20 18:07:10.989205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.269 [2024-11-20 18:07:10.989211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.269 [2024-11-20 18:07:10.989229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.269 qpair failed and we were unable to recover it.
00:40:11.269 [2024-11-20 18:07:10.999140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.269 [2024-11-20 18:07:10.999223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.269 [2024-11-20 18:07:10.999241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.269 [2024-11-20 18:07:10.999251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.269 [2024-11-20 18:07:10.999258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.269 [2024-11-20 18:07:10.999275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.269 qpair failed and we were unable to recover it.
00:40:11.269 [2024-11-20 18:07:11.009184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.269 [2024-11-20 18:07:11.009264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.269 [2024-11-20 18:07:11.009281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.269 [2024-11-20 18:07:11.009288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.269 [2024-11-20 18:07:11.009295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.269 [2024-11-20 18:07:11.009312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.269 qpair failed and we were unable to recover it.
00:40:11.269 [2024-11-20 18:07:11.019194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.019255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.019278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.019285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.019292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.019308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.029225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.029294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.029311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.029318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.029324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.029341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.039262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.039328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.039345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.039352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.039358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.039375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.049371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.049477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.049494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.049502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.049509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.049526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.059325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.059386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.059402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.059410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.059422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.059439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.069317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.069387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.069405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.069412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.069419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.069436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.079382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.079445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.079461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.079468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.079475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.079490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.089438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.089531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.089548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.089555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.089562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.089578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.099442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.099508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.099525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.099532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.099538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.099555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.109477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.109550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.109571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.109578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.109584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.109601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.119482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.119549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.119565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.119572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.119578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.119595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.129568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.129636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.129652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.129660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.129666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.129683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.139506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.139569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.139585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.270 [2024-11-20 18:07:11.139592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.270 [2024-11-20 18:07:11.139599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.270 [2024-11-20 18:07:11.139615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.270 qpair failed and we were unable to recover it.
00:40:11.270 [2024-11-20 18:07:11.149567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.270 [2024-11-20 18:07:11.149628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.270 [2024-11-20 18:07:11.149644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.271 [2024-11-20 18:07:11.149651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.271 [2024-11-20 18:07:11.149664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.271 [2024-11-20 18:07:11.149680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.271 qpair failed and we were unable to recover it.
00:40:11.271 [2024-11-20 18:07:11.159610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.271 [2024-11-20 18:07:11.159679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.271 [2024-11-20 18:07:11.159696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.271 [2024-11-20 18:07:11.159703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.271 [2024-11-20 18:07:11.159710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.271 [2024-11-20 18:07:11.159725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.271 qpair failed and we were unable to recover it.
00:40:11.271 [2024-11-20 18:07:11.169680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.271 [2024-11-20 18:07:11.169757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.271 [2024-11-20 18:07:11.169776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.271 [2024-11-20 18:07:11.169784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.271 [2024-11-20 18:07:11.169790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.271 [2024-11-20 18:07:11.169808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.271 qpair failed and we were unable to recover it.
00:40:11.271 [2024-11-20 18:07:11.179661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.271 [2024-11-20 18:07:11.179740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.271 [2024-11-20 18:07:11.179757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.271 [2024-11-20 18:07:11.179765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.271 [2024-11-20 18:07:11.179771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.271 [2024-11-20 18:07:11.179789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.271 qpair failed and we were unable to recover it.
00:40:11.534 [2024-11-20 18:07:11.189672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.534 [2024-11-20 18:07:11.189735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.534 [2024-11-20 18:07:11.189752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.534 [2024-11-20 18:07:11.189759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.534 [2024-11-20 18:07:11.189765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.534 [2024-11-20 18:07:11.189782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.534 qpair failed and we were unable to recover it.
00:40:11.534 [2024-11-20 18:07:11.199737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.534 [2024-11-20 18:07:11.199817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.534 [2024-11-20 18:07:11.199835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.534 [2024-11-20 18:07:11.199842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.534 [2024-11-20 18:07:11.199848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.534 [2024-11-20 18:07:11.199864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.534 qpair failed and we were unable to recover it.
00:40:11.534 [2024-11-20 18:07:11.209801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.534 [2024-11-20 18:07:11.209875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.534 [2024-11-20 18:07:11.209892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.534 [2024-11-20 18:07:11.209900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.534 [2024-11-20 18:07:11.209906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.534 [2024-11-20 18:07:11.209923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.534 qpair failed and we were unable to recover it.
00:40:11.534 [2024-11-20 18:07:11.219656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.534 [2024-11-20 18:07:11.219719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.534 [2024-11-20 18:07:11.219736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.534 [2024-11-20 18:07:11.219743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.534 [2024-11-20 18:07:11.219749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.534 [2024-11-20 18:07:11.219766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.534 qpair failed and we were unable to recover it.
00:40:11.534 [2024-11-20 18:07:11.229832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.534 [2024-11-20 18:07:11.229916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.534 [2024-11-20 18:07:11.229933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.534 [2024-11-20 18:07:11.229940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.534 [2024-11-20 18:07:11.229947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.534 [2024-11-20 18:07:11.229965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.534 qpair failed and we were unable to recover it.
00:40:11.534 [2024-11-20 18:07:11.239747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.534 [2024-11-20 18:07:11.239811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.534 [2024-11-20 18:07:11.239830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.534 [2024-11-20 18:07:11.239837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.534 [2024-11-20 18:07:11.239849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.534 [2024-11-20 18:07:11.239866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.534 qpair failed and we were unable to recover it.
00:40:11.534 [2024-11-20 18:07:11.249860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.534 [2024-11-20 18:07:11.249936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.534 [2024-11-20 18:07:11.249964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.534 [2024-11-20 18:07:11.249971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.534 [2024-11-20 18:07:11.249977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.534 [2024-11-20 18:07:11.249997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.534 qpair failed and we were unable to recover it.
00:40:11.534 [2024-11-20 18:07:11.259895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.534 [2024-11-20 18:07:11.259965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.260004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.260013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.260020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.260045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.269918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.269989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.270027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.270038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.270045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.270070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.279973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.280042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.280062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.280069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.280076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.280094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.290023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.290117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.290136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.290143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.290150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.290175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.299985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.300043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.300057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.300064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.300070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.300085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.310010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.310069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.310084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.310091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.310097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.310111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.320033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.320089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.320104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.320111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.320117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.320131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.330047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.330102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.330115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.330122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.330133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.330147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.340082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.340139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.340152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.340163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.340169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.340184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.350111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.350167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.350180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.350187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.350194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.350208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.360166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.360228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.360241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.360248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.360254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.360268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.370173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.370257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.370270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.370277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.370283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.370297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.380200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.380253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.380266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.380273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.380279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.535 [2024-11-20 18:07:11.380293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.535 qpair failed and we were unable to recover it.
00:40:11.535 [2024-11-20 18:07:11.390225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.535 [2024-11-20 18:07:11.390279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.535 [2024-11-20 18:07:11.390292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.535 [2024-11-20 18:07:11.390299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.535 [2024-11-20 18:07:11.390305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.536 [2024-11-20 18:07:11.390318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.536 qpair failed and we were unable to recover it.
00:40:11.536 [2024-11-20 18:07:11.400234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.536 [2024-11-20 18:07:11.400292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.536 [2024-11-20 18:07:11.400304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.536 [2024-11-20 18:07:11.400311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.536 [2024-11-20 18:07:11.400317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.536 [2024-11-20 18:07:11.400331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.536 qpair failed and we were unable to recover it.
00:40:11.536 [2024-11-20 18:07:11.410227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.536 [2024-11-20 18:07:11.410282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.536 [2024-11-20 18:07:11.410295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.536 [2024-11-20 18:07:11.410301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.536 [2024-11-20 18:07:11.410308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.536 [2024-11-20 18:07:11.410321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.536 qpair failed and we were unable to recover it.
00:40:11.536 [2024-11-20 18:07:11.420247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.536 [2024-11-20 18:07:11.420300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.536 [2024-11-20 18:07:11.420313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.536 [2024-11-20 18:07:11.420319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.536 [2024-11-20 18:07:11.420329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.536 [2024-11-20 18:07:11.420343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.536 qpair failed and we were unable to recover it.
00:40:11.536 [2024-11-20 18:07:11.430332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.536 [2024-11-20 18:07:11.430399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.536 [2024-11-20 18:07:11.430412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.536 [2024-11-20 18:07:11.430418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.536 [2024-11-20 18:07:11.430424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.536 [2024-11-20 18:07:11.430438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.536 qpair failed and we were unable to recover it.
00:40:11.536 [2024-11-20 18:07:11.440386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.536 [2024-11-20 18:07:11.440450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.536 [2024-11-20 18:07:11.440464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.536 [2024-11-20 18:07:11.440471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.536 [2024-11-20 18:07:11.440477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.536 [2024-11-20 18:07:11.440494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.536 qpair failed and we were unable to recover it.
00:40:11.798 [2024-11-20 18:07:11.450407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.798 [2024-11-20 18:07:11.450497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.798 [2024-11-20 18:07:11.450510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.798 [2024-11-20 18:07:11.450517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.450523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.450537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.460400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.799 [2024-11-20 18:07:11.460448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.799 [2024-11-20 18:07:11.460461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.799 [2024-11-20 18:07:11.460468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.460474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.460488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.470446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.799 [2024-11-20 18:07:11.470500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.799 [2024-11-20 18:07:11.470513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.799 [2024-11-20 18:07:11.470520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.470526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.470540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.480479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.799 [2024-11-20 18:07:11.480533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.799 [2024-11-20 18:07:11.480546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.799 [2024-11-20 18:07:11.480552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.480559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.480572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.490536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.799 [2024-11-20 18:07:11.490638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.799 [2024-11-20 18:07:11.490651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.799 [2024-11-20 18:07:11.490658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.490665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.490678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.500492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.799 [2024-11-20 18:07:11.500545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.799 [2024-11-20 18:07:11.500558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.799 [2024-11-20 18:07:11.500565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.500571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.500585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.510562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.799 [2024-11-20 18:07:11.510612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.799 [2024-11-20 18:07:11.510627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.799 [2024-11-20 18:07:11.510634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.510644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.510658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.520596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.799 [2024-11-20 18:07:11.520695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.799 [2024-11-20 18:07:11.520708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.799 [2024-11-20 18:07:11.520715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.520721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.520735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.530642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.799 [2024-11-20 18:07:11.530701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.799 [2024-11-20 18:07:11.530715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.799 [2024-11-20 18:07:11.530721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.799 [2024-11-20 18:07:11.530728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:11.799 [2024-11-20 18:07:11.530741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.799 qpair failed and we were unable to recover it.
00:40:11.799 [2024-11-20 18:07:11.540645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.799 [2024-11-20 18:07:11.540722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.799 [2024-11-20 18:07:11.540735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.799 [2024-11-20 18:07:11.540741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.799 [2024-11-20 18:07:11.540748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.799 [2024-11-20 18:07:11.540762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.799 qpair failed and we were unable to recover it. 00:40:11.799 [2024-11-20 18:07:11.550641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.799 [2024-11-20 18:07:11.550690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.799 [2024-11-20 18:07:11.550702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.799 [2024-11-20 18:07:11.550709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.799 [2024-11-20 18:07:11.550716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.799 [2024-11-20 18:07:11.550729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.799 qpair failed and we were unable to recover it. 00:40:11.799 [2024-11-20 18:07:11.560697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.799 [2024-11-20 18:07:11.560753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.799 [2024-11-20 18:07:11.560766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.799 [2024-11-20 18:07:11.560773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.799 [2024-11-20 18:07:11.560779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.799 [2024-11-20 18:07:11.560792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.799 qpair failed and we were unable to recover it. 
00:40:11.799 [2024-11-20 18:07:11.570716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.799 [2024-11-20 18:07:11.570768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.799 [2024-11-20 18:07:11.570781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.799 [2024-11-20 18:07:11.570788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.799 [2024-11-20 18:07:11.570794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.799 [2024-11-20 18:07:11.570807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.799 qpair failed and we were unable to recover it. 00:40:11.799 [2024-11-20 18:07:11.580740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.799 [2024-11-20 18:07:11.580791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.799 [2024-11-20 18:07:11.580805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.799 [2024-11-20 18:07:11.580812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.799 [2024-11-20 18:07:11.580818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.580835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:11.800 [2024-11-20 18:07:11.590784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.590836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.590850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.590857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.590863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.590877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 
00:40:11.800 [2024-11-20 18:07:11.600726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.600782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.600796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.600807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.600813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.600827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:11.800 [2024-11-20 18:07:11.610842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.610895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.610909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.610916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.610923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.610936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:11.800 [2024-11-20 18:07:11.620857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.620911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.620924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.620931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.620937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.620950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 
00:40:11.800 [2024-11-20 18:07:11.630875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.630939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.630964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.630973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.630980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.630999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:11.800 [2024-11-20 18:07:11.640885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.640949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.640975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.640983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.640990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.641009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:11.800 [2024-11-20 18:07:11.650931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.651012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.651029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.651036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.651043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.651057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 
00:40:11.800 [2024-11-20 18:07:11.660974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.661024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.661037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.661044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.661050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.661064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:11.800 [2024-11-20 18:07:11.670967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.671025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.671038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.671045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.671051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.671065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:11.800 [2024-11-20 18:07:11.680990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.681059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.681072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.681079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.681085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.681099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 
00:40:11.800 [2024-11-20 18:07:11.691053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.691109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.691122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.691132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.691139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.691152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:11.800 [2024-11-20 18:07:11.701069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.800 [2024-11-20 18:07:11.701116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.800 [2024-11-20 18:07:11.701129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.800 [2024-11-20 18:07:11.701136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.800 [2024-11-20 18:07:11.701142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:11.800 [2024-11-20 18:07:11.701156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.800 qpair failed and we were unable to recover it. 00:40:12.063 [2024-11-20 18:07:11.711083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.711133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.711146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.711153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.711164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.711178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 
00:40:12.063 [2024-11-20 18:07:11.721133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.721192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.721205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.721212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.721219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.721232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 00:40:12.063 [2024-11-20 18:07:11.731139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.731195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.731209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.731215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.731222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.731235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 00:40:12.063 [2024-11-20 18:07:11.741175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.741275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.741288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.741295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.741301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.741315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 
00:40:12.063 [2024-11-20 18:07:11.751179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.751237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.751250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.751257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.751263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.751276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 00:40:12.063 [2024-11-20 18:07:11.761238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.761336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.761349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.761355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.761361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.761375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 00:40:12.063 [2024-11-20 18:07:11.771282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.771342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.771356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.771363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.771369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.771383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 
00:40:12.063 [2024-11-20 18:07:11.781310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.781368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.781381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.781391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.781397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.781411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 00:40:12.063 [2024-11-20 18:07:11.791290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.791339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.791352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.791359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.063 [2024-11-20 18:07:11.791365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.063 [2024-11-20 18:07:11.791378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.063 qpair failed and we were unable to recover it. 00:40:12.063 [2024-11-20 18:07:11.801285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.063 [2024-11-20 18:07:11.801372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.063 [2024-11-20 18:07:11.801384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.063 [2024-11-20 18:07:11.801391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.801397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.801411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 
00:40:12.064 [2024-11-20 18:07:11.811315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.811365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.811379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.811386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.811392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.811405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.821383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.821462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.821475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.821482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.821488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.821501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.831447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.831500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.831514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.831521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.831527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.831541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 
00:40:12.064 [2024-11-20 18:07:11.841326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.841381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.841394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.841401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.841407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.841421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.851492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.851559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.851572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.851579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.851585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.851599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.861453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.861510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.861522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.861529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.861535] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.861549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 
00:40:12.064 [2024-11-20 18:07:11.871524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.871572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.871586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.871596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.871602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.871615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.881571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.881626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.881639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.881646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.881652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.881665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.891592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.891645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.891658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.891664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.891671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.891683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 
00:40:12.064 [2024-11-20 18:07:11.901593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.901642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.901654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.901661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.901667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.901680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.911644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.911700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.911713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.911720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.911726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.911740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.921656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.921706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.921719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.921725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.921732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.921745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 
00:40:12.064 [2024-11-20 18:07:11.931690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.064 [2024-11-20 18:07:11.931745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.064 [2024-11-20 18:07:11.931758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.064 [2024-11-20 18:07:11.931765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.064 [2024-11-20 18:07:11.931771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.064 [2024-11-20 18:07:11.931784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.064 qpair failed and we were unable to recover it. 00:40:12.064 [2024-11-20 18:07:11.941698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.065 [2024-11-20 18:07:11.941748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.065 [2024-11-20 18:07:11.941762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.065 [2024-11-20 18:07:11.941769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.065 [2024-11-20 18:07:11.941775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.065 [2024-11-20 18:07:11.941788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.065 qpair failed and we were unable to recover it. 00:40:12.065 [2024-11-20 18:07:11.951722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.065 [2024-11-20 18:07:11.951772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.065 [2024-11-20 18:07:11.951785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.065 [2024-11-20 18:07:11.951792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.065 [2024-11-20 18:07:11.951798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.065 [2024-11-20 18:07:11.951811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.065 qpair failed and we were unable to recover it. 
00:40:12.065 [2024-11-20 18:07:11.961769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.065 [2024-11-20 18:07:11.961871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.065 [2024-11-20 18:07:11.961885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.065 [2024-11-20 18:07:11.961895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.065 [2024-11-20 18:07:11.961902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.065 [2024-11-20 18:07:11.961915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.065 qpair failed and we were unable to recover it. 00:40:12.065 [2024-11-20 18:07:11.971817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.065 [2024-11-20 18:07:11.971878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.065 [2024-11-20 18:07:11.971891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.065 [2024-11-20 18:07:11.971898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.065 [2024-11-20 18:07:11.971904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.065 [2024-11-20 18:07:11.971917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.065 qpair failed and we were unable to recover it. 00:40:12.327 [2024-11-20 18:07:11.981838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:11.981897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:11.981910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:11.981918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:11.981924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:11.981937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 
00:40:12.327 [2024-11-20 18:07:11.991851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:11.991948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:11.991961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:11.991968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:11.991974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:11.991988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 00:40:12.327 [2024-11-20 18:07:12.001903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.001964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.001977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.001984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.001990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.002004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 00:40:12.327 [2024-11-20 18:07:12.011928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.011982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.011995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.012002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.012008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.012021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 
00:40:12.327 [2024-11-20 18:07:12.021818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.021879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.021892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.021898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.021905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.021918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 00:40:12.327 [2024-11-20 18:07:12.031934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.031988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.032001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.032007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.032014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.032027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 00:40:12.327 [2024-11-20 18:07:12.042014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.042065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.042078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.042084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.042090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.042104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 
00:40:12.327 [2024-11-20 18:07:12.051911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.051983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.051997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.052007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.052013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.052026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 00:40:12.327 [2024-11-20 18:07:12.062059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.062113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.062126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.062132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.062139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.062152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 00:40:12.327 [2024-11-20 18:07:12.072126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.072212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.072225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.072232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.072238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.072251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 
00:40:12.327 [2024-11-20 18:07:12.081998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.327 [2024-11-20 18:07:12.082054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.327 [2024-11-20 18:07:12.082067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.327 [2024-11-20 18:07:12.082074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.327 [2024-11-20 18:07:12.082080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.327 [2024-11-20 18:07:12.082094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.327 qpair failed and we were unable to recover it. 00:40:12.328 [2024-11-20 18:07:12.092149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.328 [2024-11-20 18:07:12.092207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.328 [2024-11-20 18:07:12.092220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.328 [2024-11-20 18:07:12.092227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.328 [2024-11-20 18:07:12.092233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.328 [2024-11-20 18:07:12.092246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.328 qpair failed and we were unable to recover it. 00:40:12.328 [2024-11-20 18:07:12.102155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.328 [2024-11-20 18:07:12.102216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.328 [2024-11-20 18:07:12.102228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.328 [2024-11-20 18:07:12.102235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.328 [2024-11-20 18:07:12.102241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.328 [2024-11-20 18:07:12.102254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.328 qpair failed and we were unable to recover it. 
00:40:12.328 [2024-11-20 18:07:12.112194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.112245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.112258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.112265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.112271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.112284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.122246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.122344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.122357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.122364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.122370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.122383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.132257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.132312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.132326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.132332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.132339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.132352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.142177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.142227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.142245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.142252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.142258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.142273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.152307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.152359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.152373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.152380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.152386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.152399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.162221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.162280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.162293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.162299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.162305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.162319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.172380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.172434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.172447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.172454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.172460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.172474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.182362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.182454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.182467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.182473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.182480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.182493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.192442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.192531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.192544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.192550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.192556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.192569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.202455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.202513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.202526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.202533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.202539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.202552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.212501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.212555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.328 [2024-11-20 18:07:12.212568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.328 [2024-11-20 18:07:12.212574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.328 [2024-11-20 18:07:12.212580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.328 [2024-11-20 18:07:12.212594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.328 qpair failed and we were unable to recover it.
00:40:12.328 [2024-11-20 18:07:12.222482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.328 [2024-11-20 18:07:12.222535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.329 [2024-11-20 18:07:12.222547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.329 [2024-11-20 18:07:12.222554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.329 [2024-11-20 18:07:12.222560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.329 [2024-11-20 18:07:12.222574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.329 qpair failed and we were unable to recover it.
00:40:12.329 [2024-11-20 18:07:12.232538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.329 [2024-11-20 18:07:12.232589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.329 [2024-11-20 18:07:12.232610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.329 [2024-11-20 18:07:12.232617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.329 [2024-11-20 18:07:12.232625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.329 [2024-11-20 18:07:12.232640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.329 qpair failed and we were unable to recover it.
00:40:12.590 [2024-11-20 18:07:12.242564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.590 [2024-11-20 18:07:12.242662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.590 [2024-11-20 18:07:12.242676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.590 [2024-11-20 18:07:12.242683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.590 [2024-11-20 18:07:12.242689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.590 [2024-11-20 18:07:12.242703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.590 qpair failed and we were unable to recover it.
00:40:12.590 [2024-11-20 18:07:12.252616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.590 [2024-11-20 18:07:12.252672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.590 [2024-11-20 18:07:12.252685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.590 [2024-11-20 18:07:12.252692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.590 [2024-11-20 18:07:12.252698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.590 [2024-11-20 18:07:12.252711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.590 qpair failed and we were unable to recover it.
00:40:12.590 [2024-11-20 18:07:12.262512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.590 [2024-11-20 18:07:12.262565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.590 [2024-11-20 18:07:12.262578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.590 [2024-11-20 18:07:12.262585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.590 [2024-11-20 18:07:12.262591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.590 [2024-11-20 18:07:12.262604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.590 qpair failed and we were unable to recover it.
00:40:12.590 [2024-11-20 18:07:12.272650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.590 [2024-11-20 18:07:12.272701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.590 [2024-11-20 18:07:12.272714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.590 [2024-11-20 18:07:12.272721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.590 [2024-11-20 18:07:12.272727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.590 [2024-11-20 18:07:12.272744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.590 qpair failed and we were unable to recover it.
00:40:12.590 [2024-11-20 18:07:12.282722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.590 [2024-11-20 18:07:12.282776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.590 [2024-11-20 18:07:12.282789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.590 [2024-11-20 18:07:12.282796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.590 [2024-11-20 18:07:12.282802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.590 [2024-11-20 18:07:12.282815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.590 qpair failed and we were unable to recover it.
00:40:12.590 [2024-11-20 18:07:12.292733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.590 [2024-11-20 18:07:12.292793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.590 [2024-11-20 18:07:12.292806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.590 [2024-11-20 18:07:12.292813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.590 [2024-11-20 18:07:12.292819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.590 [2024-11-20 18:07:12.292833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.590 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.302822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.302872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.302884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.302891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.302897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.302911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.312754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.312806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.312820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.312826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.312833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.312846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.322814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.322865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.322881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.322888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.322894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.322907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.332724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.332790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.332803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.332810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.332816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.332830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.342839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.342896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.342908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.342915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.342922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.342935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.352928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.353014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.353040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.353048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.353055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.353074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.362929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.363011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.363026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.363034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.363040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.363061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.372988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.373044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.373058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.373065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.373071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.373086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.382968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.383019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.383032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.383039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.383045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.383059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.591 qpair failed and we were unable to recover it.
00:40:12.591 [2024-11-20 18:07:12.392961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.591 [2024-11-20 18:07:12.393013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.591 [2024-11-20 18:07:12.393026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.591 [2024-11-20 18:07:12.393033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.591 [2024-11-20 18:07:12.393039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.591 [2024-11-20 18:07:12.393052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.403013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.403090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.403104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.403110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.403116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.403129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.413122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.413178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.413195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.413201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.413208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.413222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.423072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.423122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.423135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.423142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.423149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.423166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.432988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.433046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.433059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.433066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.433072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.433086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.443171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.443232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.443246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.443252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.443259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.443273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.453191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.453246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.453260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.453266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.453273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.453289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.463196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.463246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.463259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.463266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.463272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.463286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.473227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.473280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.473293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.473300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.473306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.473319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.483305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.483359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.483372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.483379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.483385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.483399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.592 [2024-11-20 18:07:12.493328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.592 [2024-11-20 18:07:12.493384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.592 [2024-11-20 18:07:12.493398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.592 [2024-11-20 18:07:12.493404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.592 [2024-11-20 18:07:12.493410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.592 [2024-11-20 18:07:12.493424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.592 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.503308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.503359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.503375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.503382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.503388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.503402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.513347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.513400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.513415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.513422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.513428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.513442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.523468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.523532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.523545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.523552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.523558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.523571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.533341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.533404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.533417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.533424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.533430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.533443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.543448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.543505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.543518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.543526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.543532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.543549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.553386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.553437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.553450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.553456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.553463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.553476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.563495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.563546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.563559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.563566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.563572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.563586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.573508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.573561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.573575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.573582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.573588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.573601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.583534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.583587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.583600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.583607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.583613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.583626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.593512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.593562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.593579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.593585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.593591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.593605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.603619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.603716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.603729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.603736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.603742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.603756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.613627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.613683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.613696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.613703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.855 [2024-11-20 18:07:12.613710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.855 [2024-11-20 18:07:12.613723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.855 qpair failed and we were unable to recover it.
00:40:12.855 [2024-11-20 18:07:12.623640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.855 [2024-11-20 18:07:12.623692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.855 [2024-11-20 18:07:12.623704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.855 [2024-11-20 18:07:12.623711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.856 [2024-11-20 18:07:12.623718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.856 [2024-11-20 18:07:12.623731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.856 qpair failed and we were unable to recover it.
00:40:12.856 [2024-11-20 18:07:12.633634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.856 [2024-11-20 18:07:12.633686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.856 [2024-11-20 18:07:12.633699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.856 [2024-11-20 18:07:12.633705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.856 [2024-11-20 18:07:12.633712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.856 [2024-11-20 18:07:12.633729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.856 qpair failed and we were unable to recover it.
00:40:12.856 [2024-11-20 18:07:12.643706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.856 [2024-11-20 18:07:12.643792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.856 [2024-11-20 18:07:12.643805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.856 [2024-11-20 18:07:12.643811] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.856 [2024-11-20 18:07:12.643818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.856 [2024-11-20 18:07:12.643831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.856 qpair failed and we were unable to recover it.
00:40:12.856 [2024-11-20 18:07:12.653676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.856 [2024-11-20 18:07:12.653735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.856 [2024-11-20 18:07:12.653748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.856 [2024-11-20 18:07:12.653754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.856 [2024-11-20 18:07:12.653761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.856 [2024-11-20 18:07:12.653774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.856 qpair failed and we were unable to recover it.
00:40:12.856 [2024-11-20 18:07:12.663744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.856 [2024-11-20 18:07:12.663793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.856 [2024-11-20 18:07:12.663805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.856 [2024-11-20 18:07:12.663812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.856 [2024-11-20 18:07:12.663818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.856 [2024-11-20 18:07:12.663832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.856 qpair failed and we were unable to recover it.
00:40:12.856 [2024-11-20 18:07:12.673731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.856 [2024-11-20 18:07:12.673777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.856 [2024-11-20 18:07:12.673791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.856 [2024-11-20 18:07:12.673799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.856 [2024-11-20 18:07:12.673805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:12.856 [2024-11-20 18:07:12.673818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.856 qpair failed and we were unable to recover it.
00:40:12.856 [2024-11-20 18:07:12.683795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.856 [2024-11-20 18:07:12.683851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.856 [2024-11-20 18:07:12.683867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.856 [2024-11-20 18:07:12.683874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.856 [2024-11-20 18:07:12.683880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.856 [2024-11-20 18:07:12.683893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.856 qpair failed and we were unable to recover it. 00:40:12.856 [2024-11-20 18:07:12.693807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.856 [2024-11-20 18:07:12.693865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.856 [2024-11-20 18:07:12.693890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.856 [2024-11-20 18:07:12.693899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.856 [2024-11-20 18:07:12.693905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.856 [2024-11-20 18:07:12.693924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.856 qpair failed and we were unable to recover it. 00:40:12.856 [2024-11-20 18:07:12.703854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.856 [2024-11-20 18:07:12.703906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.856 [2024-11-20 18:07:12.703924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.856 [2024-11-20 18:07:12.703932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.856 [2024-11-20 18:07:12.703938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.856 [2024-11-20 18:07:12.703954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.856 qpair failed and we were unable to recover it. 
00:40:12.856 [2024-11-20 18:07:12.713720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.856 [2024-11-20 18:07:12.713773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.856 [2024-11-20 18:07:12.713787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.856 [2024-11-20 18:07:12.713794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.856 [2024-11-20 18:07:12.713801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.856 [2024-11-20 18:07:12.713815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.856 qpair failed and we were unable to recover it. 00:40:12.856 [2024-11-20 18:07:12.723935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.856 [2024-11-20 18:07:12.723994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.856 [2024-11-20 18:07:12.724008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.856 [2024-11-20 18:07:12.724015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.856 [2024-11-20 18:07:12.724025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.856 [2024-11-20 18:07:12.724039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.856 qpair failed and we were unable to recover it. 00:40:12.856 [2024-11-20 18:07:12.733920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.856 [2024-11-20 18:07:12.733972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.856 [2024-11-20 18:07:12.733986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.856 [2024-11-20 18:07:12.733993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.856 [2024-11-20 18:07:12.733999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.856 [2024-11-20 18:07:12.734012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.856 qpair failed and we were unable to recover it. 
00:40:12.856 [2024-11-20 18:07:12.743950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.856 [2024-11-20 18:07:12.744015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.856 [2024-11-20 18:07:12.744028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.856 [2024-11-20 18:07:12.744035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.856 [2024-11-20 18:07:12.744042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.856 [2024-11-20 18:07:12.744055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.856 qpair failed and we were unable to recover it. 00:40:12.856 [2024-11-20 18:07:12.753959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.856 [2024-11-20 18:07:12.754010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.857 [2024-11-20 18:07:12.754023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.857 [2024-11-20 18:07:12.754030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.857 [2024-11-20 18:07:12.754036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.857 [2024-11-20 18:07:12.754050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.857 qpair failed and we were unable to recover it. 00:40:12.857 [2024-11-20 18:07:12.764027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:12.857 [2024-11-20 18:07:12.764080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:12.857 [2024-11-20 18:07:12.764093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:12.857 [2024-11-20 18:07:12.764099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.857 [2024-11-20 18:07:12.764106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:12.857 [2024-11-20 18:07:12.764119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:12.857 qpair failed and we were unable to recover it. 
00:40:13.119 [2024-11-20 18:07:12.774036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.119 [2024-11-20 18:07:12.774088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.119 [2024-11-20 18:07:12.774106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.119 [2024-11-20 18:07:12.774114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.119 [2024-11-20 18:07:12.774120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.119 [2024-11-20 18:07:12.774134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.119 qpair failed and we were unable to recover it. 00:40:13.119 [2024-11-20 18:07:12.783952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.119 [2024-11-20 18:07:12.784078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.119 [2024-11-20 18:07:12.784093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.119 [2024-11-20 18:07:12.784100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.119 [2024-11-20 18:07:12.784106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.119 [2024-11-20 18:07:12.784120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.119 qpair failed and we were unable to recover it. 00:40:13.119 [2024-11-20 18:07:12.793936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.119 [2024-11-20 18:07:12.793982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.119 [2024-11-20 18:07:12.793996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.119 [2024-11-20 18:07:12.794003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.119 [2024-11-20 18:07:12.794009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.119 [2024-11-20 18:07:12.794022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.119 qpair failed and we were unable to recover it. 
00:40:13.119 [2024-11-20 18:07:12.804121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.119 [2024-11-20 18:07:12.804179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.119 [2024-11-20 18:07:12.804192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.119 [2024-11-20 18:07:12.804199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.119 [2024-11-20 18:07:12.804205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.119 [2024-11-20 18:07:12.804219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.119 qpair failed and we were unable to recover it. 00:40:13.119 [2024-11-20 18:07:12.814116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.119 [2024-11-20 18:07:12.814173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.119 [2024-11-20 18:07:12.814186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.119 [2024-11-20 18:07:12.814192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.119 [2024-11-20 18:07:12.814202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.119 [2024-11-20 18:07:12.814216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.119 qpair failed and we were unable to recover it. 00:40:13.119 [2024-11-20 18:07:12.824196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.119 [2024-11-20 18:07:12.824247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.119 [2024-11-20 18:07:12.824260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.119 [2024-11-20 18:07:12.824266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.119 [2024-11-20 18:07:12.824272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.119 [2024-11-20 18:07:12.824286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.119 qpair failed and we were unable to recover it. 
00:40:13.119 [2024-11-20 18:07:12.834084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.119 [2024-11-20 18:07:12.834149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.119 [2024-11-20 18:07:12.834166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.119 [2024-11-20 18:07:12.834173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.119 [2024-11-20 18:07:12.834179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.119 [2024-11-20 18:07:12.834192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.119 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.844239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.844345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.844358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.844365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.844372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.844385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.854258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.854313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.854327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.854333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.854339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.854352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 
00:40:13.120 [2024-11-20 18:07:12.864291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.864344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.864358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.864364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.864370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.864384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.874301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.874350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.874364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.874371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.874377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.874390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.884318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.884404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.884417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.884424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.884430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.884443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 
00:40:13.120 [2024-11-20 18:07:12.894338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.894390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.894402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.894409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.894415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.894428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.904316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.904367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.904380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.904386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.904396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.904409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.914393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.914444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.914458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.914465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.914471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.914484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 
00:40:13.120 [2024-11-20 18:07:12.924466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.924521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.924534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.924541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.924547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.924561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.934486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.934535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.934548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.934554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.934560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.934574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.944539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.944591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.944604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.944610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.944616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.944630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 
00:40:13.120 [2024-11-20 18:07:12.954389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.954445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.954458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.954465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.954471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.954484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.964585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.964642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.964654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.120 [2024-11-20 18:07:12.964661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.120 [2024-11-20 18:07:12.964667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.120 [2024-11-20 18:07:12.964681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.120 qpair failed and we were unable to recover it. 00:40:13.120 [2024-11-20 18:07:12.974600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.120 [2024-11-20 18:07:12.974680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.120 [2024-11-20 18:07:12.974693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.121 [2024-11-20 18:07:12.974700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.121 [2024-11-20 18:07:12.974706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.121 [2024-11-20 18:07:12.974719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.121 qpair failed and we were unable to recover it. 
00:40:13.121 [2024-11-20 18:07:12.984599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.121 [2024-11-20 18:07:12.984650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.121 [2024-11-20 18:07:12.984663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.121 [2024-11-20 18:07:12.984670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.121 [2024-11-20 18:07:12.984677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.121 [2024-11-20 18:07:12.984690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.121 qpair failed and we were unable to recover it. 00:40:13.121 [2024-11-20 18:07:12.994626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.121 [2024-11-20 18:07:12.994675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.121 [2024-11-20 18:07:12.994689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.121 [2024-11-20 18:07:12.994695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.121 [2024-11-20 18:07:12.994708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.121 [2024-11-20 18:07:12.994722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.121 qpair failed and we were unable to recover it. 00:40:13.121 [2024-11-20 18:07:13.004692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.121 [2024-11-20 18:07:13.004747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.121 [2024-11-20 18:07:13.004760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.121 [2024-11-20 18:07:13.004767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.121 [2024-11-20 18:07:13.004773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.121 [2024-11-20 18:07:13.004787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.121 qpair failed and we were unable to recover it. 
00:40:13.121 [2024-11-20 18:07:13.014681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.121 [2024-11-20 18:07:13.014763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.121 [2024-11-20 18:07:13.014777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.121 [2024-11-20 18:07:13.014783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.121 [2024-11-20 18:07:13.014790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.121 [2024-11-20 18:07:13.014803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.121 qpair failed and we were unable to recover it. 00:40:13.121 [2024-11-20 18:07:13.024712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.121 [2024-11-20 18:07:13.024763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.121 [2024-11-20 18:07:13.024776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.121 [2024-11-20 18:07:13.024783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.121 [2024-11-20 18:07:13.024789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.121 [2024-11-20 18:07:13.024802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.121 qpair failed and we were unable to recover it. 00:40:13.383 [2024-11-20 18:07:13.034738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.383 [2024-11-20 18:07:13.034784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.383 [2024-11-20 18:07:13.034797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.383 [2024-11-20 18:07:13.034804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.383 [2024-11-20 18:07:13.034810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.383 [2024-11-20 18:07:13.034824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.383 qpair failed and we were unable to recover it. 
00:40:13.383 [2024-11-20 18:07:13.044798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.383 [2024-11-20 18:07:13.044881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.383 [2024-11-20 18:07:13.044894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.383 [2024-11-20 18:07:13.044901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.383 [2024-11-20 18:07:13.044907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.383 [2024-11-20 18:07:13.044920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.383 qpair failed and we were unable to recover it. 00:40:13.383 [2024-11-20 18:07:13.054774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.383 [2024-11-20 18:07:13.054826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.383 [2024-11-20 18:07:13.054839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.383 [2024-11-20 18:07:13.054846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.383 [2024-11-20 18:07:13.054852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.383 [2024-11-20 18:07:13.054866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.383 qpair failed and we were unable to recover it. 00:40:13.383 [2024-11-20 18:07:13.064833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.383 [2024-11-20 18:07:13.064885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.383 [2024-11-20 18:07:13.064899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.383 [2024-11-20 18:07:13.064905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.383 [2024-11-20 18:07:13.064912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.383 [2024-11-20 18:07:13.064925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.383 qpair failed and we were unable to recover it. 
00:40:13.383 [2024-11-20 18:07:13.074865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.383 [2024-11-20 18:07:13.074911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.383 [2024-11-20 18:07:13.074925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.074931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.074937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.074951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.084805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.084870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.084882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.084889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.084899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.084912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.094922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.094977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.095002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.095010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.095017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.095036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 
00:40:13.384 [2024-11-20 18:07:13.105001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.105057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.105072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.105079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.105085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.105100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.114950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.114995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.115009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.115016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.115022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.115036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.125048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.125101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.125114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.125121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.125127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.125140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 
00:40:13.384 [2024-11-20 18:07:13.135026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.135084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.135097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.135104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.135110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.135124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.145027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.145080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.145094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.145101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.145107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.145120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.155058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.155108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.155121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.155128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.155134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.155147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 
00:40:13.384 [2024-11-20 18:07:13.165136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.165196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.165209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.165216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.165222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.165236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.175137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.175191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.175205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.175211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.175222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.175236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.185186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.185234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.185247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.185253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.185260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.185273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 
00:40:13.384 [2024-11-20 18:07:13.195167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.195223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.195236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.195243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.195249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.384 [2024-11-20 18:07:13.195263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.384 qpair failed and we were unable to recover it. 00:40:13.384 [2024-11-20 18:07:13.205228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.384 [2024-11-20 18:07:13.205323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.384 [2024-11-20 18:07:13.205336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.384 [2024-11-20 18:07:13.205343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.384 [2024-11-20 18:07:13.205349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.205363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 00:40:13.385 [2024-11-20 18:07:13.215240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.385 [2024-11-20 18:07:13.215287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.385 [2024-11-20 18:07:13.215300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.385 [2024-11-20 18:07:13.215307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.385 [2024-11-20 18:07:13.215313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.215327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 
00:40:13.385 [2024-11-20 18:07:13.225302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.385 [2024-11-20 18:07:13.225357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.385 [2024-11-20 18:07:13.225370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.385 [2024-11-20 18:07:13.225377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.385 [2024-11-20 18:07:13.225383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.225396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 00:40:13.385 [2024-11-20 18:07:13.235166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.385 [2024-11-20 18:07:13.235211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.385 [2024-11-20 18:07:13.235224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.385 [2024-11-20 18:07:13.235231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.385 [2024-11-20 18:07:13.235237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.235250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 00:40:13.385 [2024-11-20 18:07:13.245399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.385 [2024-11-20 18:07:13.245468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.385 [2024-11-20 18:07:13.245481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.385 [2024-11-20 18:07:13.245488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.385 [2024-11-20 18:07:13.245494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.245507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 
00:40:13.385 [2024-11-20 18:07:13.255351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.385 [2024-11-20 18:07:13.255476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.385 [2024-11-20 18:07:13.255489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.385 [2024-11-20 18:07:13.255496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.385 [2024-11-20 18:07:13.255502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.255515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 00:40:13.385 [2024-11-20 18:07:13.265288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.385 [2024-11-20 18:07:13.265340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.385 [2024-11-20 18:07:13.265352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.385 [2024-11-20 18:07:13.265362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.385 [2024-11-20 18:07:13.265369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.265382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 00:40:13.385 [2024-11-20 18:07:13.275358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.385 [2024-11-20 18:07:13.275449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.385 [2024-11-20 18:07:13.275462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.385 [2024-11-20 18:07:13.275469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.385 [2024-11-20 18:07:13.275475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.275488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 
00:40:13.385 [2024-11-20 18:07:13.285456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.385 [2024-11-20 18:07:13.285513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.385 [2024-11-20 18:07:13.285526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.385 [2024-11-20 18:07:13.285533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.385 [2024-11-20 18:07:13.285539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.385 [2024-11-20 18:07:13.285552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.385 qpair failed and we were unable to recover it. 00:40:13.645 [2024-11-20 18:07:13.295474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.645 [2024-11-20 18:07:13.295521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.645 [2024-11-20 18:07:13.295533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.645 [2024-11-20 18:07:13.295540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.645 [2024-11-20 18:07:13.295546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.645 [2024-11-20 18:07:13.295559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.645 qpair failed and we were unable to recover it. 00:40:13.645 [2024-11-20 18:07:13.305512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.645 [2024-11-20 18:07:13.305566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.645 [2024-11-20 18:07:13.305579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.645 [2024-11-20 18:07:13.305586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.645 [2024-11-20 18:07:13.305592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.645 [2024-11-20 18:07:13.305605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.645 qpair failed and we were unable to recover it. 
00:40:13.645 [2024-11-20 18:07:13.315509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.645 [2024-11-20 18:07:13.315565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.645 [2024-11-20 18:07:13.315578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.645 [2024-11-20 18:07:13.315585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.645 [2024-11-20 18:07:13.315591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.645 [2024-11-20 18:07:13.315605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.645 qpair failed and we were unable to recover it. 00:40:13.645 [2024-11-20 18:07:13.325521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.645 [2024-11-20 18:07:13.325574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.645 [2024-11-20 18:07:13.325586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.645 [2024-11-20 18:07:13.325593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.645 [2024-11-20 18:07:13.325599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.645 [2024-11-20 18:07:13.325612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.645 qpair failed and we were unable to recover it. 00:40:13.645 [2024-11-20 18:07:13.335557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.335620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.335633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.335639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.335645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.335659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 
00:40:13.646 [2024-11-20 18:07:13.345614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.345661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.345674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.345681] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.345687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.345700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 00:40:13.646 [2024-11-20 18:07:13.355609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.355655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.355668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.355678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.355684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.355698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 00:40:13.646 [2024-11-20 18:07:13.365685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.365764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.365777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.365784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.365790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.365802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 
00:40:13.646 [2024-11-20 18:07:13.375667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.375719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.375732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.375739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.375745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.375759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 00:40:13.646 [2024-11-20 18:07:13.385766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.385822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.385835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.385842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.385848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.385861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 00:40:13.646 [2024-11-20 18:07:13.395698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.395749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.395762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.395768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.395775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.395788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 
00:40:13.646 [2024-11-20 18:07:13.405669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.405729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.405744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.405751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.405757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.405771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 00:40:13.646 [2024-11-20 18:07:13.415782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.415878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.415891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.415898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.415904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.415918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 00:40:13.646 [2024-11-20 18:07:13.425826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.425923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.425949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.425957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.425964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.425982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 
00:40:13.646 [2024-11-20 18:07:13.435819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.435866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.435881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.435889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.435895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.435909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 00:40:13.646 [2024-11-20 18:07:13.445907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.445969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.445994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.446007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.446014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.446033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 00:40:13.646 [2024-11-20 18:07:13.455905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.455954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.646 [2024-11-20 18:07:13.455969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.646 [2024-11-20 18:07:13.455976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.646 [2024-11-20 18:07:13.455982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.646 [2024-11-20 18:07:13.455996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.646 qpair failed and we were unable to recover it. 
00:40:13.646 [2024-11-20 18:07:13.465946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.646 [2024-11-20 18:07:13.466045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.466059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.466066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.466072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.466086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 00:40:13.647 [2024-11-20 18:07:13.475902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.475948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.475962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.475968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.475975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.475988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 00:40:13.647 [2024-11-20 18:07:13.486020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.486078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.486090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.486097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.486103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.486117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 
00:40:13.647 [2024-11-20 18:07:13.496014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.496065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.496078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.496084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.496091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.496104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 00:40:13.647 [2024-11-20 18:07:13.505958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.506011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.506026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.506033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.506039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.506053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 00:40:13.647 [2024-11-20 18:07:13.516054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.516145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.516161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.516169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.516175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.516189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 
00:40:13.647 [2024-11-20 18:07:13.526127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.526191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.526204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.526211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.526217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.526231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 00:40:13.647 [2024-11-20 18:07:13.536124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.536176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.536189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.536200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.536207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.536220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 00:40:13.647 [2024-11-20 18:07:13.546194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.546293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.546307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.546313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.546320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.546334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 
00:40:13.647 [2024-11-20 18:07:13.556171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.647 [2024-11-20 18:07:13.556216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.647 [2024-11-20 18:07:13.556229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.647 [2024-11-20 18:07:13.556236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.647 [2024-11-20 18:07:13.556242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.647 [2024-11-20 18:07:13.556255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.647 qpair failed and we were unable to recover it. 00:40:13.909 [2024-11-20 18:07:13.566232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.566317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.566330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.909 [2024-11-20 18:07:13.566337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.909 [2024-11-20 18:07:13.566344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.909 [2024-11-20 18:07:13.566357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.909 qpair failed and we were unable to recover it. 00:40:13.909 [2024-11-20 18:07:13.576217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.576268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.576281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.909 [2024-11-20 18:07:13.576288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.909 [2024-11-20 18:07:13.576294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.909 [2024-11-20 18:07:13.576308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.909 qpair failed and we were unable to recover it. 
00:40:13.909 [2024-11-20 18:07:13.586259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.586308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.586321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.909 [2024-11-20 18:07:13.586328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.909 [2024-11-20 18:07:13.586334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.909 [2024-11-20 18:07:13.586347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.909 qpair failed and we were unable to recover it. 00:40:13.909 [2024-11-20 18:07:13.596261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.596303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.596316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.909 [2024-11-20 18:07:13.596322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.909 [2024-11-20 18:07:13.596329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.909 [2024-11-20 18:07:13.596342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.909 qpair failed and we were unable to recover it. 00:40:13.909 [2024-11-20 18:07:13.606225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.606286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.606300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.909 [2024-11-20 18:07:13.606307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.909 [2024-11-20 18:07:13.606313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.909 [2024-11-20 18:07:13.606330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.909 qpair failed and we were unable to recover it. 
00:40:13.909 [2024-11-20 18:07:13.616326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.616375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.616389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.909 [2024-11-20 18:07:13.616396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.909 [2024-11-20 18:07:13.616402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.909 [2024-11-20 18:07:13.616416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.909 qpair failed and we were unable to recover it. 00:40:13.909 [2024-11-20 18:07:13.626361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.626411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.626426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.909 [2024-11-20 18:07:13.626436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.909 [2024-11-20 18:07:13.626442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.909 [2024-11-20 18:07:13.626455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.909 qpair failed and we were unable to recover it. 00:40:13.909 [2024-11-20 18:07:13.636440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.636486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.636500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.909 [2024-11-20 18:07:13.636506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.909 [2024-11-20 18:07:13.636513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.909 [2024-11-20 18:07:13.636526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.909 qpair failed and we were unable to recover it. 
00:40:13.909 [2024-11-20 18:07:13.646454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.909 [2024-11-20 18:07:13.646512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.909 [2024-11-20 18:07:13.646524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.646531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.646537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.646550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 00:40:13.910 [2024-11-20 18:07:13.656443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.656492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.656504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.656511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.656517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.656530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 00:40:13.910 [2024-11-20 18:07:13.666528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.666579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.666592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.666599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.666605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.666618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 
00:40:13.910 [2024-11-20 18:07:13.676483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.676573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.676586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.676593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.676599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.676612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 00:40:13.910 [2024-11-20 18:07:13.686539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.686621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.686634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.686640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.686646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.686660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 00:40:13.910 [2024-11-20 18:07:13.696549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.696602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.696616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.696623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.696629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.696643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 
00:40:13.910 [2024-11-20 18:07:13.706610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.706659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.706672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.706679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.706685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.706699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 00:40:13.910 [2024-11-20 18:07:13.716590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.716644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.716660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.716667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.716674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.716687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 00:40:13.910 [2024-11-20 18:07:13.726661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.726713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.726726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.910 [2024-11-20 18:07:13.726733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.910 [2024-11-20 18:07:13.726739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.910 [2024-11-20 18:07:13.726753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.910 qpair failed and we were unable to recover it. 
00:40:13.910 [2024-11-20 18:07:13.736684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.910 [2024-11-20 18:07:13.736734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.910 [2024-11-20 18:07:13.736747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.736753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.736760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.736773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 00:40:13.911 [2024-11-20 18:07:13.746676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.911 [2024-11-20 18:07:13.746731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.911 [2024-11-20 18:07:13.746744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.746750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.746756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.746769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 00:40:13.911 [2024-11-20 18:07:13.756701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.911 [2024-11-20 18:07:13.756750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.911 [2024-11-20 18:07:13.756763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.756770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.756776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.756790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 
00:40:13.911 [2024-11-20 18:07:13.766777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.911 [2024-11-20 18:07:13.766831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.911 [2024-11-20 18:07:13.766844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.766851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.766856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.766869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 00:40:13.911 [2024-11-20 18:07:13.776770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.911 [2024-11-20 18:07:13.776833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.911 [2024-11-20 18:07:13.776846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.776852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.776859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.776872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 00:40:13.911 [2024-11-20 18:07:13.786812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.911 [2024-11-20 18:07:13.786866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.911 [2024-11-20 18:07:13.786878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.786885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.786891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.786905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 
00:40:13.911 [2024-11-20 18:07:13.796798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.911 [2024-11-20 18:07:13.796850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.911 [2024-11-20 18:07:13.796874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.796882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.796889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.796908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 00:40:13.911 [2024-11-20 18:07:13.806866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.911 [2024-11-20 18:07:13.806925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.911 [2024-11-20 18:07:13.806955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.806964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.806971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.806990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 00:40:13.911 [2024-11-20 18:07:13.816877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:13.911 [2024-11-20 18:07:13.816937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:13.911 [2024-11-20 18:07:13.816962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:13.911 [2024-11-20 18:07:13.816970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:13.911 [2024-11-20 18:07:13.816977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:13.911 [2024-11-20 18:07:13.816995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:13.911 qpair failed and we were unable to recover it. 
00:40:14.174 [2024-11-20 18:07:13.826925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.826987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.827013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.827024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.827032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.827050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.836887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.836934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.836950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.836957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.836963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.836978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.846998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.847057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.847071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.847078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.847084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.847098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.856964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.857015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.857028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.857035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.857041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.857054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.867011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.867062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.867075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.867082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.867088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.867101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.877008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.877056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.877070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.877076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.877083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.877096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.887102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.887162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.887175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.887182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.887188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.887202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.897088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.897137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.897153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.897164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.897171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.897184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.907147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.907211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.907224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.907231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.907237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.907250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.917134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.917180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.917193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.917200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.917206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.917220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.927178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.927233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.927247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.927254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.927260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.927273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.937174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.937225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.937237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.937244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.937250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.174 [2024-11-20 18:07:13.937267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.174 qpair failed and we were unable to recover it.
00:40:14.174 [2024-11-20 18:07:13.947181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.174 [2024-11-20 18:07:13.947231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.174 [2024-11-20 18:07:13.947244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.174 [2024-11-20 18:07:13.947251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.174 [2024-11-20 18:07:13.947257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:13.947271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:13.957253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:13.957307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:13.957319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:13.957326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:13.957332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:13.957345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:13.967306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:13.967360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:13.967373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:13.967380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:13.967386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:13.967399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:13.977197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:13.977250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:13.977264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:13.977271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:13.977278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:13.977292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:13.987355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:13.987408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:13.987425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:13.987431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:13.987437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:13.987451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:13.997331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:13.997379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:13.997392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:13.997398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:13.997404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:13.997418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:14.007450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:14.007505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:14.007519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:14.007526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:14.007533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:14.007546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:14.017444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:14.017493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:14.017507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:14.017513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:14.017520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:14.017533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:14.027465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:14.027516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:14.027529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:14.027536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:14.027542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:14.027559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:14.037489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:14.037535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:14.037548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:14.037554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:14.037561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:14.037574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:14.047550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:14.047642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:14.047654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:14.047661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:14.047667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:14.047680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:14.057511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:14.057559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:14.057572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:14.057579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:14.057585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:14.057598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:14.067603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:14.067651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:14.067664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:14.067671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:14.067677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:14.067690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.175 [2024-11-20 18:07:14.077589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.175 [2024-11-20 18:07:14.077642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.175 [2024-11-20 18:07:14.077658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.175 [2024-11-20 18:07:14.077665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.175 [2024-11-20 18:07:14.077671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.175 [2024-11-20 18:07:14.077685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.175 qpair failed and we were unable to recover it.
00:40:14.437 [2024-11-20 18:07:14.087674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.437 [2024-11-20 18:07:14.087730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.437 [2024-11-20 18:07:14.087743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.437 [2024-11-20 18:07:14.087750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.437 [2024-11-20 18:07:14.087757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.437 [2024-11-20 18:07:14.087769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.437 qpair failed and we were unable to recover it.
00:40:14.437 [2024-11-20 18:07:14.097630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.437 [2024-11-20 18:07:14.097675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.437 [2024-11-20 18:07:14.097688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.437 [2024-11-20 18:07:14.097695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.437 [2024-11-20 18:07:14.097701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.437 [2024-11-20 18:07:14.097714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.437 qpair failed and we were unable to recover it.
00:40:14.437 [2024-11-20 18:07:14.107692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.437 [2024-11-20 18:07:14.107744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.437 [2024-11-20 18:07:14.107757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.437 [2024-11-20 18:07:14.107764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.437 [2024-11-20 18:07:14.107770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.437 [2024-11-20 18:07:14.107783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.437 qpair failed and we were unable to recover it.
00:40:14.437 [2024-11-20 18:07:14.117705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.437 [2024-11-20 18:07:14.117750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.437 [2024-11-20 18:07:14.117763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.437 [2024-11-20 18:07:14.117770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.437 [2024-11-20 18:07:14.117776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.437 [2024-11-20 18:07:14.117792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.437 qpair failed and we were unable to recover it.
00:40:14.437 [2024-11-20 18:07:14.127777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.437 [2024-11-20 18:07:14.127833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.437 [2024-11-20 18:07:14.127847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.437 [2024-11-20 18:07:14.127856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.437 [2024-11-20 18:07:14.127864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.437 [2024-11-20 18:07:14.127877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.437 qpair failed and we were unable to recover it.
00:40:14.437 [2024-11-20 18:07:14.137644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.137697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.137710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.137717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.137723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.137736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.147829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.147934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.147948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.147955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.147962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.147979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.157789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.157837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.157850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.157857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.157863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.157877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.167878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.167932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.167949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.167955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.167961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.167975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.177759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.177807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.177820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.177828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.177835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.177849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.187935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.188021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.188035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.188041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.188047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.188060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.197921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.197971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.197997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.198005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.198013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.198032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.207970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.208027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.208042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.208049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.208055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.208075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.217996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.218045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.218059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.218066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.218072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.218086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.228019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.228073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.228087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.228093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.228100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.228114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.238047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.238093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.238107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.238114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.238120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.238134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.248050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.248106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.248119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.248126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.248133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.248146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.258093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.258139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.438 [2024-11-20 18:07:14.258156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.438 [2024-11-20 18:07:14.258169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.438 [2024-11-20 18:07:14.258175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.438 [2024-11-20 18:07:14.258188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.438 qpair failed and we were unable to recover it.
00:40:14.438 [2024-11-20 18:07:14.268130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.438 [2024-11-20 18:07:14.268191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.268204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.268211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.268217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.268231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.439 [2024-11-20 18:07:14.278132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.439 [2024-11-20 18:07:14.278183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.278196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.278203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.278209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.278222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.439 [2024-11-20 18:07:14.288191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.439 [2024-11-20 18:07:14.288246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.288259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.288265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.288271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.288285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.439 [2024-11-20 18:07:14.298219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.439 [2024-11-20 18:07:14.298270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.298284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.298291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.298297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.298314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.439 [2024-11-20 18:07:14.308239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.439 [2024-11-20 18:07:14.308292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.308304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.308311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.308317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.308330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.439 [2024-11-20 18:07:14.318250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.439 [2024-11-20 18:07:14.318293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.318306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.318312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.318318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.318332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.439 [2024-11-20 18:07:14.328234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.439 [2024-11-20 18:07:14.328330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.328343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.328350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.328356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.328369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.439 [2024-11-20 18:07:14.338311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.439 [2024-11-20 18:07:14.338357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.338370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.338377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.338383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.338396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.439 [2024-11-20 18:07:14.348400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.439 [2024-11-20 18:07:14.348454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.439 [2024-11-20 18:07:14.348469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.439 [2024-11-20 18:07:14.348476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.439 [2024-11-20 18:07:14.348483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.439 [2024-11-20 18:07:14.348496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.439 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.358391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.358442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.358454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.358461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.358467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.358480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.368453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.368508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.368521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.368528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.368534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.368547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.378424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.378498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.378511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.378517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.378523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.378537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.388502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.388573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.388586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.388592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.388602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.388615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.398487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.398534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.398548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.398554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.398560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.398573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.408538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.408590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.408603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.408609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.408615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.408629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.418540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.418589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.418601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.418608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.418614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.418627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.428602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.428688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.428701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.428707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.428713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.428727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.438583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.438628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.438648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.438655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.438661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.438675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.448657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.448745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.448758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.448765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.448771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.448784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.458644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.458694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.458707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.458714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.458720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.458733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.468756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.468810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.468823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.468829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.468836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.468849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.702 [2024-11-20 18:07:14.478699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.702 [2024-11-20 18:07:14.478743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.702 [2024-11-20 18:07:14.478756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.702 [2024-11-20 18:07:14.478763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.702 [2024-11-20 18:07:14.478772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.702 [2024-11-20 18:07:14.478786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.702 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.488779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.488858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.488871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.488878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.488884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.488897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.498634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.498686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.498699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.498706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.498712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.498725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.508810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.508888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.508903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.508910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.508916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.508930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.518788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.518835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.518848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.518854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.518861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.518874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.528847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.528913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.528938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.528947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.528954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.528972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.538871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.538925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.538949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.538958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.538964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.538983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.548874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.548929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.548955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.548963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.548970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.548988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.558907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.558958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.558973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.558980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.558986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.559001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.568994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.569056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.569070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.569077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.569088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.569101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.578952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.578999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.579012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.579019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.579025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.579038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.588881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.588934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.588948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.588955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.588961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.588975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.598951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.599003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.599016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.599023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.599029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.599042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.703 [2024-11-20 18:07:14.609058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.703 [2024-11-20 18:07:14.609113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.703 [2024-11-20 18:07:14.609126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.703 [2024-11-20 18:07:14.609133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.703 [2024-11-20 18:07:14.609139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.703 [2024-11-20 18:07:14.609153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.703 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.619090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.619144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.619157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.966 [2024-11-20 18:07:14.619169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.966 [2024-11-20 18:07:14.619175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.966 [2024-11-20 18:07:14.619188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.966 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.629115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.629168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.629181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.966 [2024-11-20 18:07:14.629188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.966 [2024-11-20 18:07:14.629194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.966 [2024-11-20 18:07:14.629207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.966 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.639140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.639193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.639206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.966 [2024-11-20 18:07:14.639212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.966 [2024-11-20 18:07:14.639219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.966 [2024-11-20 18:07:14.639232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.966 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.649217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.649271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.649284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.966 [2024-11-20 18:07:14.649291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.966 [2024-11-20 18:07:14.649297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.966 [2024-11-20 18:07:14.649311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.966 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.659207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.659260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.659274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.966 [2024-11-20 18:07:14.659281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.966 [2024-11-20 18:07:14.659291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.966 [2024-11-20 18:07:14.659305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.966 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.669216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.669299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.669312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.966 [2024-11-20 18:07:14.669319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.966 [2024-11-20 18:07:14.669325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.966 [2024-11-20 18:07:14.669338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.966 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.679349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.679416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.679429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.966 [2024-11-20 18:07:14.679436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.966 [2024-11-20 18:07:14.679442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.966 [2024-11-20 18:07:14.679455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.966 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.689364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.689418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.689431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.966 [2024-11-20 18:07:14.689438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.966 [2024-11-20 18:07:14.689444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.966 [2024-11-20 18:07:14.689457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.966 qpair failed and we were unable to recover it.
00:40:14.966 [2024-11-20 18:07:14.699301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.966 [2024-11-20 18:07:14.699353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.966 [2024-11-20 18:07:14.699367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.699373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.699380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.699393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.709312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.709364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.709377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.709384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.709390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.709404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.719409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.719474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.719487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.719493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.719499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.719513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.729426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.729477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.729490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.729496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.729503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.729516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.739408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.739460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.739473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.739480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.739486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.739499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.749431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.749481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.749494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.749501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.749511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.749525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.759429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.759496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.759509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.759516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.759522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.759535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.769527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.769578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.769591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.769598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.769604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.769618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.779439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.779496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.779510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.779517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.779523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.779540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.789552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.789600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.789614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.789621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.789627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.789641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.967 qpair failed and we were unable to recover it.
00:40:14.967 [2024-11-20 18:07:14.799563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.967 [2024-11-20 18:07:14.799615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.967 [2024-11-20 18:07:14.799628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.967 [2024-11-20 18:07:14.799635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.967 [2024-11-20 18:07:14.799641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.967 [2024-11-20 18:07:14.799654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.968 qpair failed and we were unable to recover it.
00:40:14.968 [2024-11-20 18:07:14.809685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.968 [2024-11-20 18:07:14.809740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.968 [2024-11-20 18:07:14.809753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.968 [2024-11-20 18:07:14.809759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.968 [2024-11-20 18:07:14.809766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.968 [2024-11-20 18:07:14.809779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.968 qpair failed and we were unable to recover it.
00:40:14.968 [2024-11-20 18:07:14.819686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.968 [2024-11-20 18:07:14.819765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.968 [2024-11-20 18:07:14.819778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.968 [2024-11-20 18:07:14.819785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.968 [2024-11-20 18:07:14.819791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.968 [2024-11-20 18:07:14.819804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.968 qpair failed and we were unable to recover it.
00:40:14.968 [2024-11-20 18:07:14.829633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.968 [2024-11-20 18:07:14.829683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.968 [2024-11-20 18:07:14.829696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.968 [2024-11-20 18:07:14.829703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.968 [2024-11-20 18:07:14.829709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.968 [2024-11-20 18:07:14.829723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.968 qpair failed and we were unable to recover it.
00:40:14.968 [2024-11-20 18:07:14.839648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.968 [2024-11-20 18:07:14.839695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.968 [2024-11-20 18:07:14.839709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.968 [2024-11-20 18:07:14.839719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.968 [2024-11-20 18:07:14.839725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.968 [2024-11-20 18:07:14.839738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.968 qpair failed and we were unable to recover it.
00:40:14.968 [2024-11-20 18:07:14.849767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.968 [2024-11-20 18:07:14.849854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.968 [2024-11-20 18:07:14.849867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.968 [2024-11-20 18:07:14.849874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.968 [2024-11-20 18:07:14.849880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.968 [2024-11-20 18:07:14.849893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.968 qpair failed and we were unable to recover it.
00:40:14.968 [2024-11-20 18:07:14.859744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.968 [2024-11-20 18:07:14.859792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.968 [2024-11-20 18:07:14.859805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.968 [2024-11-20 18:07:14.859812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.968 [2024-11-20 18:07:14.859818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.968 [2024-11-20 18:07:14.859832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.968 qpair failed and we were unable to recover it.
00:40:14.968 [2024-11-20 18:07:14.869757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:14.968 [2024-11-20 18:07:14.869804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:14.968 [2024-11-20 18:07:14.869817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:14.968 [2024-11-20 18:07:14.869823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:14.968 [2024-11-20 18:07:14.869829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:14.968 [2024-11-20 18:07:14.869843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:14.968 qpair failed and we were unable to recover it.
00:40:15.230 [2024-11-20 18:07:14.879785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.230 [2024-11-20 18:07:14.879830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.230 [2024-11-20 18:07:14.879844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.230 [2024-11-20 18:07:14.879850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.230 [2024-11-20 18:07:14.879857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.230 [2024-11-20 18:07:14.879870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.230 qpair failed and we were unable to recover it.
00:40:15.230 [2024-11-20 18:07:14.889739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.230 [2024-11-20 18:07:14.889792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.230 [2024-11-20 18:07:14.889805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.230 [2024-11-20 18:07:14.889812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.230 [2024-11-20 18:07:14.889818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.230 [2024-11-20 18:07:14.889832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.230 qpair failed and we were unable to recover it.
00:40:15.230 [2024-11-20 18:07:14.899809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.230 [2024-11-20 18:07:14.899884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.230 [2024-11-20 18:07:14.899897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.230 [2024-11-20 18:07:14.899904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.230 [2024-11-20 18:07:14.899910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.230 [2024-11-20 18:07:14.899924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.230 qpair failed and we were unable to recover it.
00:40:15.230 [2024-11-20 18:07:14.909866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.230 [2024-11-20 18:07:14.909919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.230 [2024-11-20 18:07:14.909944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.230 [2024-11-20 18:07:14.909953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.230 [2024-11-20 18:07:14.909959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.230 [2024-11-20 18:07:14.909979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.230 qpair failed and we were unable to recover it.
00:40:15.230 [2024-11-20 18:07:14.919908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.230 [2024-11-20 18:07:14.919965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.230 [2024-11-20 18:07:14.919990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.231 [2024-11-20 18:07:14.919998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.231 [2024-11-20 18:07:14.920005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.231 [2024-11-20 18:07:14.920024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.231 qpair failed and we were unable to recover it.
00:40:15.231 [2024-11-20 18:07:14.929975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.231 [2024-11-20 18:07:14.930033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.231 [2024-11-20 18:07:14.930048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.231 [2024-11-20 18:07:14.930059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.231 [2024-11-20 18:07:14.930066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.231 [2024-11-20 18:07:14.930081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.231 qpair failed and we were unable to recover it.
00:40:15.231 [2024-11-20 18:07:14.939915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.231 [2024-11-20 18:07:14.939999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.231 [2024-11-20 18:07:14.940013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.231 [2024-11-20 18:07:14.940019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.231 [2024-11-20 18:07:14.940026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.231 [2024-11-20 18:07:14.940040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.231 qpair failed and we were unable to recover it.
00:40:15.231 [2024-11-20 18:07:14.949951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.231 [2024-11-20 18:07:14.949997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.231 [2024-11-20 18:07:14.950010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.231 [2024-11-20 18:07:14.950017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.231 [2024-11-20 18:07:14.950023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.231 [2024-11-20 18:07:14.950037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.231 qpair failed and we were unable to recover it.
00:40:15.231 [2024-11-20 18:07:14.960009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:15.231 [2024-11-20 18:07:14.960058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:15.231 [2024-11-20 18:07:14.960071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:15.231 [2024-11-20 18:07:14.960078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:15.231 [2024-11-20 18:07:14.960084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0
00:40:15.231 [2024-11-20 18:07:14.960097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:15.231 qpair failed and we were unable to recover it.
00:40:15.231 [2024-11-20 18:07:14.970066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.231 [2024-11-20 18:07:14.970123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.231 [2024-11-20 18:07:14.970137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.231 [2024-11-20 18:07:14.970144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.231 [2024-11-20 18:07:14.970150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.231 [2024-11-20 18:07:14.970167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.231 qpair failed and we were unable to recover it. 00:40:15.231 [2024-11-20 18:07:14.980062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.231 [2024-11-20 18:07:14.980118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.231 [2024-11-20 18:07:14.980131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.231 [2024-11-20 18:07:14.980138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.231 [2024-11-20 18:07:14.980144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.231 [2024-11-20 18:07:14.980161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.231 qpair failed and we were unable to recover it. 00:40:15.231 [2024-11-20 18:07:14.990080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.231 [2024-11-20 18:07:14.990132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.231 [2024-11-20 18:07:14.990145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.231 [2024-11-20 18:07:14.990151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.231 [2024-11-20 18:07:14.990162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.231 [2024-11-20 18:07:14.990176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.231 qpair failed and we were unable to recover it. 
00:40:15.231 [2024-11-20 18:07:15.000098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.231 [2024-11-20 18:07:15.000147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.231 [2024-11-20 18:07:15.000164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.231 [2024-11-20 18:07:15.000171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.231 [2024-11-20 18:07:15.000178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.231 [2024-11-20 18:07:15.000191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.231 qpair failed and we were unable to recover it. 00:40:15.231 [2024-11-20 18:07:15.010148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.231 [2024-11-20 18:07:15.010223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.231 [2024-11-20 18:07:15.010237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.231 [2024-11-20 18:07:15.010244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.231 [2024-11-20 18:07:15.010250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.231 [2024-11-20 18:07:15.010264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.231 qpair failed and we were unable to recover it. 00:40:15.231 [2024-11-20 18:07:15.020163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.231 [2024-11-20 18:07:15.020212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.231 [2024-11-20 18:07:15.020225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.231 [2024-11-20 18:07:15.020235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.231 [2024-11-20 18:07:15.020241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.231 [2024-11-20 18:07:15.020255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.231 qpair failed and we were unable to recover it. 
00:40:15.231 [2024-11-20 18:07:15.030173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.231 [2024-11-20 18:07:15.030220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.231 [2024-11-20 18:07:15.030234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.030240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.030247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.030260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 00:40:15.232 [2024-11-20 18:07:15.040190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.040239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.040252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.040259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.040265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.040278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 00:40:15.232 [2024-11-20 18:07:15.050285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.050362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.050375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.050382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.050388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.050401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 
00:40:15.232 [2024-11-20 18:07:15.060233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.060285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.060298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.060305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.060311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.060324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 00:40:15.232 [2024-11-20 18:07:15.070310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.070358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.070371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.070378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.070384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.070397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 00:40:15.232 [2024-11-20 18:07:15.080330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.080379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.080392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.080399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.080405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.080418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 
00:40:15.232 [2024-11-20 18:07:15.090374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.090429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.090442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.090448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.090455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.090468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 00:40:15.232 [2024-11-20 18:07:15.100374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.100425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.100438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.100444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.100450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.100463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 00:40:15.232 [2024-11-20 18:07:15.110407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.110457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.110469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.110479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.110485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.110499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 
00:40:15.232 [2024-11-20 18:07:15.120438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.120485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.120498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.120505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.120511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.120524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 00:40:15.232 [2024-11-20 18:07:15.130568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.130669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.130682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.130689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.130695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.130708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 00:40:15.232 [2024-11-20 18:07:15.140541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.232 [2024-11-20 18:07:15.140594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.232 [2024-11-20 18:07:15.140607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.232 [2024-11-20 18:07:15.140614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.232 [2024-11-20 18:07:15.140620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.232 [2024-11-20 18:07:15.140633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.232 qpair failed and we were unable to recover it. 
00:40:15.494 [2024-11-20 18:07:15.150522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.494 [2024-11-20 18:07:15.150571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.494 [2024-11-20 18:07:15.150584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.494 [2024-11-20 18:07:15.150591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.494 [2024-11-20 18:07:15.150597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.494 [2024-11-20 18:07:15.150610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.494 qpair failed and we were unable to recover it. 00:40:15.494 [2024-11-20 18:07:15.160533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.494 [2024-11-20 18:07:15.160583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.494 [2024-11-20 18:07:15.160596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.494 [2024-11-20 18:07:15.160603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.494 [2024-11-20 18:07:15.160609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.494 [2024-11-20 18:07:15.160622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.494 qpair failed and we were unable to recover it. 00:40:15.494 [2024-11-20 18:07:15.170616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.494 [2024-11-20 18:07:15.170671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.494 [2024-11-20 18:07:15.170683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.494 [2024-11-20 18:07:15.170690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.494 [2024-11-20 18:07:15.170696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.494 [2024-11-20 18:07:15.170710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.494 qpair failed and we were unable to recover it. 
00:40:15.494 [2024-11-20 18:07:15.180618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.494 [2024-11-20 18:07:15.180668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.494 [2024-11-20 18:07:15.180681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.494 [2024-11-20 18:07:15.180688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.494 [2024-11-20 18:07:15.180694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.494 [2024-11-20 18:07:15.180707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.494 qpair failed and we were unable to recover it. 00:40:15.494 [2024-11-20 18:07:15.190619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.494 [2024-11-20 18:07:15.190665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.494 [2024-11-20 18:07:15.190679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.494 [2024-11-20 18:07:15.190685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.494 [2024-11-20 18:07:15.190691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.494 [2024-11-20 18:07:15.190704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.494 qpair failed and we were unable to recover it. 00:40:15.494 [2024-11-20 18:07:15.200646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.494 [2024-11-20 18:07:15.200691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.494 [2024-11-20 18:07:15.200703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.494 [2024-11-20 18:07:15.200713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.494 [2024-11-20 18:07:15.200720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.494 [2024-11-20 18:07:15.200733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.494 qpair failed and we were unable to recover it. 
00:40:15.494 [2024-11-20 18:07:15.210706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.494 [2024-11-20 18:07:15.210767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.494 [2024-11-20 18:07:15.210779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.494 [2024-11-20 18:07:15.210786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.494 [2024-11-20 18:07:15.210792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.494 [2024-11-20 18:07:15.210805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.494 qpair failed and we were unable to recover it. 00:40:15.494 [2024-11-20 18:07:15.220721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.220773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.220786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.220793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.220799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.220812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 00:40:15.495 [2024-11-20 18:07:15.230729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.230778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.230791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.230798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.230804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.230817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 
00:40:15.495 [2024-11-20 18:07:15.240719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.240808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.240822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.240829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.240835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.240848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 00:40:15.495 [2024-11-20 18:07:15.250832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.250884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.250897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.250903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.250909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.250923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 00:40:15.495 [2024-11-20 18:07:15.260827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.260882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.260907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.260915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.260922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.260940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 
00:40:15.495 [2024-11-20 18:07:15.270835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.270899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.270924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.270932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.270939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.270958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 00:40:15.495 [2024-11-20 18:07:15.280834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.280886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.280911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.280920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.280927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.280945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 00:40:15.495 [2024-11-20 18:07:15.290948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.291005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.291030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.291047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.291054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.291073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 
00:40:15.495 [2024-11-20 18:07:15.300924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.300977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.300993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.301000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.301006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.301021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 00:40:15.495 [2024-11-20 18:07:15.310906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.310956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.310970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.310976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.310983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.310997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 00:40:15.495 [2024-11-20 18:07:15.320954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.321003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.321016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.321023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.321030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.495 [2024-11-20 18:07:15.321043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.495 qpair failed and we were unable to recover it. 
00:40:15.495 [2024-11-20 18:07:15.330951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.495 [2024-11-20 18:07:15.331014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.495 [2024-11-20 18:07:15.331027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.495 [2024-11-20 18:07:15.331034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.495 [2024-11-20 18:07:15.331040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.496 [2024-11-20 18:07:15.331054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.496 qpair failed and we were unable to recover it. 00:40:15.496 [2024-11-20 18:07:15.341059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.496 [2024-11-20 18:07:15.341114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.496 [2024-11-20 18:07:15.341127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.496 [2024-11-20 18:07:15.341134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.496 [2024-11-20 18:07:15.341140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.496 [2024-11-20 18:07:15.341154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.496 qpair failed and we were unable to recover it. 00:40:15.496 [2024-11-20 18:07:15.351080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.496 [2024-11-20 18:07:15.351130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.496 [2024-11-20 18:07:15.351143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.496 [2024-11-20 18:07:15.351150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.496 [2024-11-20 18:07:15.351156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.496 [2024-11-20 18:07:15.351174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.496 qpair failed and we were unable to recover it. 
00:40:15.496 [2024-11-20 18:07:15.361100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.496 [2024-11-20 18:07:15.361147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.496 [2024-11-20 18:07:15.361164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.496 [2024-11-20 18:07:15.361171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.496 [2024-11-20 18:07:15.361178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.496 [2024-11-20 18:07:15.361191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.496 qpair failed and we were unable to recover it. 00:40:15.496 [2024-11-20 18:07:15.371163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.496 [2024-11-20 18:07:15.371216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.496 [2024-11-20 18:07:15.371230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.496 [2024-11-20 18:07:15.371237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.496 [2024-11-20 18:07:15.371243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.496 [2024-11-20 18:07:15.371256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.496 qpair failed and we were unable to recover it. 00:40:15.496 [2024-11-20 18:07:15.381046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.496 [2024-11-20 18:07:15.381141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.496 [2024-11-20 18:07:15.381157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.496 [2024-11-20 18:07:15.381167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.496 [2024-11-20 18:07:15.381174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.496 [2024-11-20 18:07:15.381188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.496 qpair failed and we were unable to recover it. 
00:40:15.496 [2024-11-20 18:07:15.391186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.496 [2024-11-20 18:07:15.391236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.496 [2024-11-20 18:07:15.391250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.496 [2024-11-20 18:07:15.391257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.496 [2024-11-20 18:07:15.391263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.496 [2024-11-20 18:07:15.391277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.496 qpair failed and we were unable to recover it. 00:40:15.496 [2024-11-20 18:07:15.401199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.496 [2024-11-20 18:07:15.401247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.496 [2024-11-20 18:07:15.401260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.496 [2024-11-20 18:07:15.401267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.496 [2024-11-20 18:07:15.401273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.496 [2024-11-20 18:07:15.401287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.496 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.411262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.411322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.411335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.411342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.411348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.411362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 
00:40:15.759 [2024-11-20 18:07:15.421245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.421296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.421309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.421316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.421322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.421335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.431310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.431384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.431397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.431403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.431410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.431422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.441288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.441337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.441351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.441357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.441363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.441377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 
00:40:15.759 [2024-11-20 18:07:15.451384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.451440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.451455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.451462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.451468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.451486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.461363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.461416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.461429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.461437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.461443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.461456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.471259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.471310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.471326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.471333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.471339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.471352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 
00:40:15.759 [2024-11-20 18:07:15.481399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.481447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.481461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.481467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.481474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.481487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.491502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.491557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.491570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.491577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.491584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.491597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.501459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.501507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.501520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.501526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.501533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.501546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 
00:40:15.759 [2024-11-20 18:07:15.511478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.511555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.511570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.511577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.511583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.511597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.521499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.521548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.521561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.521568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.759 [2024-11-20 18:07:15.521574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.759 [2024-11-20 18:07:15.521588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.759 qpair failed and we were unable to recover it. 00:40:15.759 [2024-11-20 18:07:15.531604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.759 [2024-11-20 18:07:15.531660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.759 [2024-11-20 18:07:15.531673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.759 [2024-11-20 18:07:15.531680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.531686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.531700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 
00:40:15.760 [2024-11-20 18:07:15.541588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.541634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.541647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.541654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.541660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.541674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.551604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.551653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.551666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.551672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.551679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.551692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.561594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.561643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.561659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.561666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.561672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.561685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 
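The retry pacing is visible in the target-side timestamps: each rejected CONNECT lands roughly 10 ms after the previous one. A small log-mining sketch, assuming this console output has been saved to a file (build.log is an invented name, not an artifact of the run):

grep -oE '[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6}\] ctrlr\.c' build.log |
  awk -F'[]:]' '{ t = $1*3600 + $2*60 + $3; if (p != "") printf "%.3f s\n", t - p; p = t }'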
00:40:15.760 [2024-11-20 18:07:15.571701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.571758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.571771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.571778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.571784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.571797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.581694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.581741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.581754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.581761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.581767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.581780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.591709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.591765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.591780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.591787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.591793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.591812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 
00:40:15.760 [2024-11-20 18:07:15.601731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.601777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.601791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.601798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.601804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.601821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.611809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.611868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.611880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.611887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.611894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.611907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.621809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.621865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.621891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.621899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.621906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.621925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 
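By this point the same failure has been logged a dozen-plus times before the driver escalates. A quick post-mortem tally over a saved copy of this output (again assuming the hypothetical build.log), counting the failed attempts and grouping them by the TCP qpair pointer that could not connect:

grep -c 'qpair failed and we were unable to recover it' build.log
grep -oE 'Failed to connect tqpair=0x[0-9a-f]+' build.log | sort | uniq -c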
00:40:15.760 [2024-11-20 18:07:15.631782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.631842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.631867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.631876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.631883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd152d0 00:40:15.760 [2024-11-20 18:07:15.631902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.641844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.641938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.642004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.642029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.642050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0b14000b90 00:40:15.760 [2024-11-20 18:07:15.642102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.651845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:15.760 [2024-11-20 18:07:15.651945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:15.760 [2024-11-20 18:07:15.651996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:15.760 [2024-11-20 18:07:15.652013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:15.760 [2024-11-20 18:07:15.652027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0b14000b90 00:40:15.760 [2024-11-20 18:07:15.652065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:15.760 qpair failed and we were unable to recover it. 00:40:15.760 [2024-11-20 18:07:15.652190] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:40:15.760 A controller has encountered a failure and is being reset. 00:40:15.760 [2024-11-20 18:07:15.652330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd231e0 (9): Bad file descriptor 00:40:15.760 Controller properly reset. 
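The escalation path reads straight out of the records above: the failures spread from qpair id 3 to qpair id 1, the Keep Alive submission on the admin queue then fails as well, the driver flushes the already-dead socket ("Bad file descriptor" on tqpair 0xd231e0) and resets the controller, and the "Initializing NVMe Controllers" banner that follows shows the reconnect succeeding. The host in this log is SPDK's userspace initiator; a rough kernel-initiator analogue for watching the same disconnect/reset cycle by hand, using nvme-cli against the address and NQN taken from this log (only a sketch, not part of the test script):

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# ...restart or kill the target so the controller disappears, then:
dmesg | tail -n 20     # the kernel host logs its own reconnect/reset attempts
nvme list-subsys       # shows the subsystem state once the session recovers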
00:40:16.022 Initializing NVMe Controllers 00:40:16.022 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:16.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:16.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:40:16.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:40:16.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:40:16.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:40:16.022 Initialization complete. Launching workers. 00:40:16.022 Starting thread on core 1 00:40:16.022 Starting thread on core 2 00:40:16.022 Starting thread on core 3 00:40:16.022 Starting thread on core 0 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:40:16.022 00:40:16.022 real 0m11.369s 00:40:16.022 user 0m21.822s 00:40:16.022 sys 0m3.704s 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:16.022 ************************************ 00:40:16.022 END TEST nvmf_target_disconnect_tc2 00:40:16.022 ************************************ 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:16.022 rmmod nvme_tcp 00:40:16.022 rmmod nvme_fabrics 00:40:16.022 rmmod nvme_keyring 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 2944405 ']' 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 2944405 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2944405 ']' 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2944405 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2944405 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2944405' 00:40:16.022 killing process with pid 2944405 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2944405 00:40:16.022 18:07:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2944405 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:16.283 18:07:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.194 18:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:18.194 00:40:18.194 real 0m21.395s 00:40:18.194 user 0m49.726s 00:40:18.194 sys 0m9.563s 00:40:18.194 18:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:18.194 18:07:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:18.194 ************************************ 00:40:18.194 END TEST nvmf_target_disconnect 00:40:18.194 ************************************ 00:40:18.454 18:07:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:18.454 00:40:18.455 real 7m50.498s 00:40:18.455 user 17m22.696s 00:40:18.455 sys 2m24.425s 00:40:18.455 18:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:18.455 18:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:18.455 ************************************ 00:40:18.455 END TEST nvmf_host 00:40:18.455 ************************************ 00:40:18.455 18:07:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:40:18.455 18:07:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:40:18.455 18:07:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:40:18.455 18:07:18 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:18.455 18:07:18 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:18.455 18:07:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:18.455 ************************************ 00:40:18.455 START TEST nvmf_target_core_interrupt_mode 00:40:18.455 ************************************ 00:40:18.455 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:40:18.455 * Looking for test storage... 00:40:18.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:40:18.455 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:18.455 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:40:18.455 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:18.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.716 --rc genhtml_branch_coverage=1 00:40:18.716 --rc genhtml_function_coverage=1 00:40:18.716 --rc genhtml_legend=1 00:40:18.716 --rc geninfo_all_blocks=1 00:40:18.716 --rc geninfo_unexecuted_blocks=1 00:40:18.716 00:40:18.716 ' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:18.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.716 --rc genhtml_branch_coverage=1 00:40:18.716 --rc genhtml_function_coverage=1 00:40:18.716 --rc genhtml_legend=1 00:40:18.716 --rc geninfo_all_blocks=1 00:40:18.716 --rc geninfo_unexecuted_blocks=1 00:40:18.716 00:40:18.716 ' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:18.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.716 --rc genhtml_branch_coverage=1 00:40:18.716 --rc genhtml_function_coverage=1 00:40:18.716 --rc genhtml_legend=1 00:40:18.716 --rc geninfo_all_blocks=1 00:40:18.716 --rc geninfo_unexecuted_blocks=1 00:40:18.716 00:40:18.716 ' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:18.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.716 --rc genhtml_branch_coverage=1 00:40:18.716 --rc genhtml_function_coverage=1 00:40:18.716 --rc genhtml_legend=1 00:40:18.716 --rc geninfo_all_blocks=1 00:40:18.716 --rc geninfo_unexecuted_blocks=1 00:40:18.716 00:40:18.716 ' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.716 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:18.717 ************************************ 00:40:18.717 START TEST nvmf_abort 00:40:18.717 ************************************ 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:40:18.717 * Looking for test storage... 00:40:18.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:40:18.717 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:40:18.978 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:18.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.979 --rc genhtml_branch_coverage=1 00:40:18.979 --rc genhtml_function_coverage=1 00:40:18.979 --rc genhtml_legend=1 00:40:18.979 --rc geninfo_all_blocks=1 00:40:18.979 --rc geninfo_unexecuted_blocks=1 00:40:18.979 00:40:18.979 ' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:18.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.979 --rc genhtml_branch_coverage=1 00:40:18.979 --rc genhtml_function_coverage=1 00:40:18.979 --rc genhtml_legend=1 00:40:18.979 --rc geninfo_all_blocks=1 00:40:18.979 --rc geninfo_unexecuted_blocks=1 00:40:18.979 00:40:18.979 ' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:18.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.979 --rc genhtml_branch_coverage=1 00:40:18.979 --rc genhtml_function_coverage=1 00:40:18.979 --rc genhtml_legend=1 00:40:18.979 --rc geninfo_all_blocks=1 00:40:18.979 --rc geninfo_unexecuted_blocks=1 00:40:18.979 00:40:18.979 ' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:18.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:18.979 --rc genhtml_branch_coverage=1 00:40:18.979 --rc genhtml_function_coverage=1 00:40:18.979 --rc genhtml_legend=1 00:40:18.979 --rc geninfo_all_blocks=1 00:40:18.979 --rc geninfo_unexecuted_blocks=1 00:40:18.979 00:40:18.979 ' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:18.979 18:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:18.979 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:18.980 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:40:18.980 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:27.112 18:07:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:27.112 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:27.112 18:07:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:27.112 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:27.112 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == 
up ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:27.112 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:27.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:27.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:40:27.112 00:40:27.112 --- 10.0.0.2 ping statistics --- 00:40:27.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.112 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:27.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:27.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:40:27.112 00:40:27.112 --- 10.0.0.1 ping statistics --- 00:40:27.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.112 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=2949854 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 2949854 00:40:27.112 
18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2949854 ']' 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:27.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:27.112 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.112 [2024-11-20 18:07:26.005299] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:27.112 [2024-11-20 18:07:26.006430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:40:27.112 [2024-11-20 18:07:26.006478] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:27.112 [2024-11-20 18:07:26.091183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:27.112 [2024-11-20 18:07:26.137930] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:27.112 [2024-11-20 18:07:26.137983] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:27.112 [2024-11-20 18:07:26.137996] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:27.112 [2024-11-20 18:07:26.138003] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:27.112 [2024-11-20 18:07:26.138009] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:27.112 [2024-11-20 18:07:26.138234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:27.112 [2024-11-20 18:07:26.138429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:27.112 [2024-11-20 18:07:26.138429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:27.112 [2024-11-20 18:07:26.202550] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:27.112 [2024-11-20 18:07:26.203474] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:27.112 [2024-11-20 18:07:26.204264] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:27.112 [2024-11-20 18:07:26.204365] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
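The "Found 0000:4b:00.x (0x8086 - 0x159b)" and "Found net devices under ..." messages in the trace above come from nvmf/common.sh walking sysfs for supported NICs. A minimal sketch of that walk, assuming the same Intel E810 vendor/device IDs as this run (the real helper also caches the PCI bus, checks the bound driver, and handles RDMA-only devices, all skipped here):

# Sketch: match Intel (0x8086) functions carrying the E810 device ID 0x159b
# and report the kernel netdevs bound to each one, as in the trace above.
intel=0x8086 e810=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
    echo "Found ${pci##*/} ($intel - $e810)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done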
00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.112 [2024-11-20 18:07:26.875238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.112 Malloc0 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.112 Delay0 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.112 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.113 18:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.113 [2024-11-20 18:07:26.959131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:27.113 18:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:40:27.372 [2024-11-20 18:07:27.135359] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:29.281 Initializing NVMe Controllers 00:40:29.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:29.281 controller IO queue size 128 less than required 00:40:29.281 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:40:29.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:40:29.281 Initialization complete. Launching workers. 
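The rpc_cmd sequence traced above (target/abort.sh@17-27) builds the abort-test target: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev, and a subsystem exposing it on 10.0.0.2:4420. The same setup can be reproduced by hand with rpc.py against the default /var/tmp/spdk.sock socket; every argument below is copied verbatim from the trace (rpc_cmd is a thin wrapper around this script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Transport, backing bdevs, subsystem, namespace, listeners.
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Then drive it with the abort example, exactly as invoked above:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128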
00:40:29.281 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29053 00:40:29.281 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29110, failed to submit 66 00:40:29.281 success 29053, unsuccessful 57, failed 0 00:40:29.281 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:29.281 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:29.282 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:29.541 rmmod nvme_tcp 00:40:29.541 rmmod nvme_fabrics 00:40:29.541 rmmod nvme_keyring 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 2949854 ']' 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 2949854 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2949854 ']' 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2949854 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2949854 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2949854' 00:40:29.541 killing process with pid 2949854 
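The killprocess trace above checks that pid 2949854 is still alive and is not a sudo wrapper before signalling it. A simplified reconstruction of that autotest_common.sh helper, assuming the Linux branch taken in this run:

# Sketch of killprocess as traced above: refuse empty pids, confirm the
# process exists, never kill a sudo parent, then kill and reap it.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}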
00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2949854 00:40:29.541 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2949854 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:29.800 18:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.710 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:31.710 00:40:31.710 real 0m13.106s 00:40:31.710 user 0m10.903s 00:40:31.710 sys 0m6.646s 00:40:31.710 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:31.710 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:40:31.710 ************************************ 00:40:31.710 END TEST nvmf_abort 00:40:31.710 ************************************ 00:40:31.710 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:40:31.710 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:31.710 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:31.710 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:31.710 ************************************ 00:40:31.710 START TEST nvmf_ns_hotplug_stress 00:40:31.710 ************************************ 00:40:31.710 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:40:31.971 * Looking for test storage... 
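nvmftestfini, traced at the end of the abort test above, unwinds everything the init path created: it unloads the NVMe initiator modules, kills the target, strips only the SPDK-tagged iptables rule, and removes the namespace. An approximate hand-run equivalent; the body of _remove_spdk_ns is not shown in the trace, so the ip netns delete line is an assumption of what it does:

# Approximate teardown equivalent of nvmftestfini as traced above.
sync
modprobe -v -r nvme-tcp                 # also drops nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep non-SPDK rules
ip netns delete cvl_0_0_ns_spdk         # assumption: what _remove_spdk_ns does
ip -4 addr flush cvl_0_1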
00:40:31.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.971 --rc genhtml_branch_coverage=1 00:40:31.971 --rc genhtml_function_coverage=1 00:40:31.971 --rc genhtml_legend=1 00:40:31.971 --rc geninfo_all_blocks=1 00:40:31.971 --rc geninfo_unexecuted_blocks=1 00:40:31.971 00:40:31.971 ' 00:40:31.971 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.971 --rc genhtml_branch_coverage=1 00:40:31.972 --rc genhtml_function_coverage=1 00:40:31.972 --rc genhtml_legend=1 00:40:31.972 --rc geninfo_all_blocks=1 00:40:31.972 --rc geninfo_unexecuted_blocks=1 00:40:31.972 00:40:31.972 ' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:31.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.972 --rc genhtml_branch_coverage=1 00:40:31.972 --rc genhtml_function_coverage=1 00:40:31.972 --rc genhtml_legend=1 00:40:31.972 --rc geninfo_all_blocks=1 00:40:31.972 --rc geninfo_unexecuted_blocks=1 00:40:31.972 00:40:31.972 ' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:31.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.972 --rc genhtml_branch_coverage=1 00:40:31.972 --rc genhtml_function_coverage=1 
00:40:31.972 --rc genhtml_legend=1 00:40:31.972 --rc geninfo_all_blocks=1 00:40:31.972 --rc geninfo_unexecuted_blocks=1 00:40:31.972 00:40:31.972 ' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
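The "lt 1.15 2" probe traced at the top of this test decides whether the installed lcov predates 2.x, which controls which coverage flags get exported. A reconstruction of the scripts/common.sh comparison, simplified to the strict '<' case exercised here (the real cmp_versions dispatches on an operator argument):

# Compare dotted versions field by field, splitting on '.', '-' and ':'.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not strictly less
}
version_lt 1.15 2 && echo "lcov < 2: enable lcov_branch_coverage/lcov_function_coverage"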
00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:31.972 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.973 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:31.973 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.973 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:31.973 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:31.973 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:40:31.973 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:40.106 18:07:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:40.106 18:07:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:40.106 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:40.106 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:40.106 18:07:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:40.106 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:40.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:40.106 18:07:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:40.106 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:40.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:40.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:40:40.107 00:40:40.107 --- 10.0.0.2 ping statistics --- 00:40:40.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:40.107 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:40.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:40.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:40:40.107 00:40:40.107 --- 10.0.0.1 ping statistics --- 00:40:40.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:40.107 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:40.107 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=2954502 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 2954502 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2954502 ']' 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:40.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
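The nvmf_tcp_init sequence traced above (here and in the abort test before it) builds a point-to-point topology out of the two E810 ports: cvl_0_0 is moved into its own namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, and a tagged iptables rule opens port 4420. Collected into one runnable sketch, with commands copied from the trace:

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
# Tagged so teardown can remove exactly this rule via 'grep -v SPDK_NVMF':
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1               # target -> initiator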
00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:40.107 [2024-11-20 18:07:39.094466] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:40.107 [2024-11-20 18:07:39.095594] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:40:40.107 [2024-11-20 18:07:39.095649] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:40.107 [2024-11-20 18:07:39.184612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:40.107 [2024-11-20 18:07:39.230262] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:40.107 [2024-11-20 18:07:39.230306] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:40.107 [2024-11-20 18:07:39.230314] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:40.107 [2024-11-20 18:07:39.230321] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:40.107 [2024-11-20 18:07:39.230327] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:40.107 [2024-11-20 18:07:39.230484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:40.107 [2024-11-20 18:07:39.231039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:40.107 [2024-11-20 18:07:39.231040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.107 [2024-11-20 18:07:39.288890] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:40.107 [2024-11-20 18:07:39.289059] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:40.107 [2024-11-20 18:07:39.289619] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:40.107 [2024-11-20 18:07:39.289914] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
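nvmf_tgt is launched inside the namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0xE: all tracepoint groups enabled (the "Tracepoint Group Mask 0xFFFF" notice) and a core mask of 0xE = 0b1110, which selects cores 1-3 and matches both the "Total cores available: 3" line and the three reactors started above. --interrupt-mode is what produces the spdk_thread_set_interrupt_mode notices: reactors wait on events instead of busy-polling. A quick way to decode such a mask (a sketch, not part of the test script):

    mask=0xE
    for i in {0..7}; do
        # bash arithmetic understands the 0x prefix; test each bit of the mask
        (( (mask >> i) & 1 )) && echo "reactor on core $i"
    done
    # prints cores 1, 2 and 3 for 0xE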
00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:40:40.107 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:40.369 [2024-11-20 18:07:40.091966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:40.369 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:40.630 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:40.630 [2024-11-20 18:07:40.440600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:40.630 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:40.890 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:40:40.890 Malloc0 00:40:40.890 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:41.151 Delay0 00:40:41.151 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:41.412 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:40:41.412 NULL1 00:40:41.674 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
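Condensed, the RPC sequence above (sh@27 through sh@36) assembles the target: a TCP transport, subsystem cnode1 (-a allows any host, -m 10 caps it at ten namespaces), data and discovery listeners on 10.0.0.2:4420, and two namespaces, Delay0 (a malloc bdev wrapped with 1000000 us artificial latencies) and NULL1 (a 1000 MB null bdev that the stress loop will later resize). With rpc.py standing for the full scripts/rpc.py path shown in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0      # 32 MB, 512-byte blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py bdev_null_create NULL1 1000 512           # 1000 MB null bdev
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1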
00:40:41.674 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2954873 00:40:41.674 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:41.674 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:40:41.674 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:41.935 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:42.195 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:40:42.195 18:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:40:42.195 true 00:40:42.195 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:42.195 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:42.455 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:42.715 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:40:42.715 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:40:42.715 true 00:40:42.976 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:42.976 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:42.976 Read completed with error (sct=0, sc=11) 00:40:42.976 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:42.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:42.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:43.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:43.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:43.236 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:40:43.236 18:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:40:43.236 18:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:40:43.498 true 00:40:43.498 18:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:43.499 18:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:44.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:44.441 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:44.442 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:40:44.442 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:40:44.702 true 00:40:44.702 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:44.702 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:44.702 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:44.964 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:40:44.964 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:40:45.224 true 00:40:45.224 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:45.224 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:45.485 18:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:45.485 18:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:40:45.485 18:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:40:45.747 true 00:40:45.747 18:07:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:45.747 18:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:46.007 18:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:46.007 18:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:40:46.007 18:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:40:46.267 true 00:40:46.267 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:46.267 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:46.527 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:46.527 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:40:46.527 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:40:46.788 true 00:40:46.788 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:46.788 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:47.048 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:47.309 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:40:47.309 18:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:40:47.309 true 00:40:47.309 18:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:47.309 18:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:48.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:48.693 18:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:48.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:48.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:48.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:48.693 18:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:40:48.693 18:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:40:48.693 true 00:40:48.954 18:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:48.954 18:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:48.954 18:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:49.214 18:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:40:49.214 18:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:40:49.474 true 00:40:49.474 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:49.474 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:49.474 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:49.734 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:40:49.734 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:40:49.994 true 00:40:49.994 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:49.994 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:50.254 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:50.254 18:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1013 00:40:50.254 18:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:40:50.513 true 00:40:50.513 18:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:50.513 18:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:51.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:51.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:51.895 18:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:51.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:51.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:51.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:51.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:51.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:51.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:51.895 18:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:40:51.895 18:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:40:52.155 true 00:40:52.155 18:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:52.155 18:07:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:53.094 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:53.094 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:40:53.094 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:40:53.355 true 00:40:53.355 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:53.355 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:53.355 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:40:53.616 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:40:53.616 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:40:53.876 true 00:40:53.876 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:53.876 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:54.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:55.075 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:55.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:55.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:55.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:55.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:55.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:55.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:55.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:55.075 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:40:55.075 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:40:55.334 true 00:40:55.334 18:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:55.334 18:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:56.273 18:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:56.273 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:40:56.273 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:40:56.532 true 00:40:56.532 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:56.532 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:56.793 18:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:56.793 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:40:56.793 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:40:57.054 true 00:40:57.054 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:57.054 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:58.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:58.436 18:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:58.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:58.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:58.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:58.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:58.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:58.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:58.436 18:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:40:58.436 18:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:40:58.436 true 00:40:58.696 18:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:58.696 18:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:59.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:59.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:40:59.524 18:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:59.524 18:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:40:59.524 18:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:40:59.784 true 00:40:59.784 18:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:40:59.784 18:07:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:00.044 18:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:00.044 18:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:41:00.044 18:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:41:00.303 true 00:41:00.303 18:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:00.303 18:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:00.599 18:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:00.600 18:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:41:00.600 18:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:41:00.914 true 00:41:00.914 18:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:00.914 18:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:01.174 18:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:01.174 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:41:01.174 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:41:01.432 true 00:41:01.432 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:01.432 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:02.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:02.807 18:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:41:02.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:02.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:02.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:02.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:02.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:02.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:02.807 18:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:41:02.807 18:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:41:02.807 true 00:41:02.807 18:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:02.807 18:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:03.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:03.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:03.745 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:04.005 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:41:04.005 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:41:04.005 true 00:41:04.005 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:04.005 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:04.264 18:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:04.522 18:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:41:04.522 18:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:41:04.522 true 00:41:04.781 18:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:04.782 18:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:05.719 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:41:05.979 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:05.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:05.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:05.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:05.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:05.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:05.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:05.979 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:41:05.979 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:41:06.239 true 00:41:06.239 18:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:06.239 18:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:07.179 18:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:07.179 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:41:07.179 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:41:07.438 true 00:41:07.438 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:07.438 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:07.696 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:07.696 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:41:07.696 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:41:07.954 true 00:41:07.954 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:07.954 18:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:09.331 
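The pattern repeating above is the hotplug loop itself: spdk_nvme_perf (sh@40) runs a 30-second, queue-depth-128, 512-byte randread workload against the subsystem while the script hot-removes and re-adds namespace 1 and bumps NULL1's size value by 1 per pass (sizes in MB). The "Read completed with error (sct=0, sc=11)" bursts are I/O landing in the window where the namespace is detached, 0x0b being the generic "Invalid Namespace or Format" status, and appear throttled because perf was started with -Q 1000. Reconstructed from the sh@44 through sh@50 markers, the loop body looks roughly like:

    # rough reconstruction from the trace markers, not the verbatim script
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do    # sh@44: run until perf exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
        null_size=$((null_size + 1))                                    # sh@49
        rpc.py bdev_null_resize NULL1 "$null_size"                      # sh@50
    done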
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:09.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:09.331 18:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:09.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:09.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:09.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:09.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:09.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:09.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:41:09.331 18:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:41:09.331 18:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:41:09.331 true 00:41:09.590 18:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:09.590 18:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:10.159 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:10.418 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:41:10.418 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:41:10.677 true 00:41:10.677 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:10.677 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:10.937 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:10.937 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:41:10.937 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:41:11.195 true 00:41:11.195 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:11.195 18:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:41:12.573 Initializing NVMe Controllers
00:41:12.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:41:12.574 Controller IO queue size 128, less than required.
00:41:12.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:41:12.574 Controller IO queue size 128, less than required.
00:41:12.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:41:12.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:41:12.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:41:12.574 Initialization complete. Launching workers.
00:41:12.574 ========================================================
00:41:12.574 Latency(us)
00:41:12.574 Device Information : IOPS MiB/s Average min max
00:41:12.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2051.26 1.00 35535.70 1492.74 1013847.81
00:41:12.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17050.64 8.33 7482.27 1113.76 499031.13
00:41:12.574 ========================================================
00:41:12.574 Total : 19101.91 9.33 10494.79 1113.76 1013847.81
00:41:12.574
00:41:12.574 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:12.574 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:41:12.574 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:41:12.574 true 00:41:12.574 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954873 00:41:12.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2954873) - No such process 00:41:12.574 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2954873 00:41:12.574 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:12.833 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:13.092 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:41:13.092 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:41:13.092 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:41:13.092 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:41:13.092 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:41:13.092 null0 00:41:13.092 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:13.092 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:13.092 18:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:41:13.352 null1 00:41:13.352 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:13.352 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:13.352 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:41:13.611 null2 00:41:13.611 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:13.611 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:13.611 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:41:13.611 null3 00:41:13.611 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:13.611 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:13.611 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:41:13.870 null4 00:41:13.870 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:13.870 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:13.870 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:41:13.870 null5 00:41:13.870 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:13.871 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:13.871 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:41:14.129 null6 00:41:14.129 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:14.129 18:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:14.129 18:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:41:14.389 null7 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
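[Editor's note] The sh@59-sh@66 markers show the orchestration around those workers: null bdevs are created, one background add_remove job is launched per thread, and the collected pids are waited on (the wait on eight pids appears just below). A sketch assembled from those markers; nthreads=8 is inferred from null0..null7 and the eight pids in the wait line, and the exact loop syntax is an assumption.

    # Assembled from the sh@59-sh@66 markers; nthreads and loop syntax assumed.
    nthreads=8
    for ((i = 0; i < nthreads; ++i)); do              # sh@59
        $rpc_py bdev_null_create "null$i" 100 4096    # sh@60: size/block args as logged
    done
    pids=()
    for ((i = 0; i < nthreads; ++i)); do              # sh@62
        add_remove $((i + 1)) "null$i" &              # sh@63: nsid i+1 on bdev null$i
        pids+=($!)                                    # sh@64: collect worker pid
    done
    wait "${pids[@]}"                                 # sh@66: join all stress threads

The interleaved sh@16/sh@17/sh@18 lines that dominate the rest of this trace are these eight workers racing each other, which is the point of the hotplug stress: concurrent add/remove of namespaces 1-8 against the same subsystem.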
00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2960933 2960934 2960937 2960940 2960943 2960947 2960949 2960951 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:14.389 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:14.649 18:08:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.649 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:14.909 
18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:14.909 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:15.168 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.168 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.168 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:15.168 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.168 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.168 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:15.168 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.168 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.169 18:08:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:15.169 18:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:15.169 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.169 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.169 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:15.169 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:15.428 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.429 18:08:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.429 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:15.689 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.689 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.689 18:08:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:15.690 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:15.950 18:08:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:41:15.950 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:16.210 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:16.210 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:16.210 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:16.210 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.210 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.211 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:16.211 18:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.211 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.471 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.732 18:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:16.732 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.992 18:08:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:16.992 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.993 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.993 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:16.993 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:16.993 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:16.993 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:16.993 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:17.253 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:17.253 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:17.253 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:17.253 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:17.253 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:17.253 18:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.253 18:08:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.253 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:17.513 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:17.773 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.773 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.773 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:41:17.773 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:17.773 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.773 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.773 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:41:17.774 18:08:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:41:17.774 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:18.035 18:08:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:18.035 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:18.296 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:41:18.296 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:41:18.296 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:41:18.296 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:41:18.296 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:18.296 18:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:18.296 rmmod nvme_tcp 00:41:18.296 rmmod nvme_fabrics 00:41:18.296 rmmod nvme_keyring 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:41:18.296 
18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 2954502 ']' 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 2954502 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2954502 ']' 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2954502 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2954502 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2954502' 00:41:18.296 killing process with pid 2954502 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2954502 00:41:18.296 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2954502 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:18.556 18:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.468 18:08:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:20.468 00:41:20.468 real 0m48.740s 00:41:20.468 user 2m57.409s 00:41:20.468 sys 0m20.666s 00:41:20.468 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:20.468 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:41:20.468 ************************************ 00:41:20.468 END TEST nvmf_ns_hotplug_stress 00:41:20.468 ************************************ 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:20.729 ************************************ 00:41:20.729 START TEST nvmf_delete_subsystem 00:41:20.729 ************************************ 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:41:20.729 * Looking for test storage... 00:41:20.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- scripts/common.sh@341 -- # ver2_l=1 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:20.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.729 --rc genhtml_branch_coverage=1 00:41:20.729 --rc genhtml_function_coverage=1 00:41:20.729 --rc genhtml_legend=1 00:41:20.729 --rc geninfo_all_blocks=1 00:41:20.729 --rc geninfo_unexecuted_blocks=1 00:41:20.729 00:41:20.729 ' 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:20.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.729 --rc genhtml_branch_coverage=1 00:41:20.729 --rc genhtml_function_coverage=1 00:41:20.729 --rc genhtml_legend=1 00:41:20.729 --rc geninfo_all_blocks=1 00:41:20.729 --rc geninfo_unexecuted_blocks=1 00:41:20.729 00:41:20.729 ' 00:41:20.729 18:08:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:20.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.729 --rc genhtml_branch_coverage=1 00:41:20.729 --rc genhtml_function_coverage=1 00:41:20.729 --rc genhtml_legend=1 00:41:20.729 --rc geninfo_all_blocks=1 00:41:20.729 --rc geninfo_unexecuted_blocks=1 00:41:20.729 00:41:20.729 ' 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:20.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.729 --rc genhtml_branch_coverage=1 00:41:20.729 --rc genhtml_function_coverage=1 00:41:20.729 --rc genhtml_legend=1 00:41:20.729 --rc geninfo_all_blocks=1 00:41:20.729 --rc geninfo_unexecuted_blocks=1 00:41:20.729 00:41:20.729 ' 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:20.729 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:41:20.730 18:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:28.901 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:28.901 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:28.901 18:08:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:28.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:28.901 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:28.902 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:28.902 18:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:28.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:28.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:41:28.902 00:41:28.902 --- 10.0.0.2 ping statistics --- 00:41:28.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.902 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:28.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:28.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:41:28.902 00:41:28.902 --- 10.0.0.1 ping statistics --- 00:41:28.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.902 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=2966022 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 2966022 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2966022 ']' 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:28.902 18:08:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:28.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 [2024-11-20 18:08:28.125332] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:28.902 [2024-11-20 18:08:28.126457] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:41:28.902 [2024-11-20 18:08:28.126504] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:28.902 [2024-11-20 18:08:28.193791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:28.902 [2024-11-20 18:08:28.237430] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:28.902 [2024-11-20 18:08:28.237482] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:28.902 [2024-11-20 18:08:28.237488] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:28.902 [2024-11-20 18:08:28.237494] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:28.902 [2024-11-20 18:08:28.237498] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:28.902 [2024-11-20 18:08:28.237688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:28.902 [2024-11-20 18:08:28.237689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.902 [2024-11-20 18:08:28.292766] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:28.902 [2024-11-20 18:08:28.293005] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:28.902 [2024-11-20 18:08:28.293433] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
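In plain terms, the nvmfappstart sequence traced above launches nvmf_tgt inside the dedicated target network namespace and then blocks in waitforlisten until the app's RPC socket answers. A minimal sketch of that bring-up, assuming the workspace paths shown in this log (the polling loop is an illustrative stand-in for the real waitforlisten helper, not its exact code):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start the target in interrupt mode on cores 0-1 (-m 0x3), as in the trace above
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # block until the app serves its UNIX-domain RPC socket (stand-in for waitforlisten)
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The interrupt-mode NOTICE lines above confirm the bring-up succeeded as intended for this suite: both reactors and every spdk_thread report interrupt rather than poll mode.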
00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 [2024-11-20 18:08:28.362660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:28.902 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.903 [2024-11-20 18:08:28.403192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.903 NULL1 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.903 18:08:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.903 Delay0 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2966046 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:41:28.903 18:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:41:28.903 [2024-11-20 18:08:28.518342] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
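Stripped of the xtrace prefixes, the whole delete_subsystem setup above reduces to a short RPC sequence; every command below is copied from the trace, and only the inline comments (my reading of the argument units, not stated in the log) are added:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512        # null bdev: 1000 MiB, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                # ~1 s of injected latency on every I/O path
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # queue depth 128 against Delay0
    sleep 2                                     # let perf fill its submission queues

The delay bdev is what makes the subsequent nvmf_delete_subsystem race reliable: at roughly one second per I/O and queue depth 128, the delete lands while essentially every queue slot is still occupied.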
00:41:30.820 18:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:30.820 18:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:30.820 18:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Write completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Write completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Write completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Write completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Write completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 starting I/O failed: -6 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.082 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 [2024-11-20 18:08:30.765392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1320 is same with the state(6) to be 
set 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Write completed with error (sct=0, sc=8) 
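Every one of these completions decodes the same way: status code type 0 is the NVMe generic command status set, in which status code 8 is Command Aborted due to SQ Deletion, which is what in-flight commands are expected to report when the target deletes the subsystem (and with it the I/O submission queues) under load. The "recv state of tqpair" errors are the host-side TCP qpairs observing the same teardown. When triaging a saved console log (the file name below is illustrative), the aborts are easy to tally:

    # total aborted completions, then split by direction
    grep -c 'completed with error (sct=0, sc=8)' console.log
    grep -o '\(Read\|Write\) completed with error (sct=0, sc=8)' console.log | sort | uniq -c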
00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 starting I/O failed: -6 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 [2024-11-20 18:08:30.766787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc8fc000c00 is same with the state(6) to be set 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error 
(sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Write completed with error (sct=0, sc=8) 00:41:31.083 Read completed with error (sct=0, sc=8) 00:41:32.024 [2024-11-20 18:08:31.739377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a80 is same with the state(6) to be set 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.024 Read completed with error (sct=0, sc=8) 00:41:32.024 [2024-11-20 18:08:31.767117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1e80 is same with the state(6) to be set 00:41:32.024 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 
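All of the aborts above trace back to the single RPC visible at the top of this stretch: nvmf_delete_subsystem, issued two seconds into the perf run. The script's pattern, sketched under the same assumptions as before, is to let I/O build up, pull the subsystem, then poll until the background perf process notices and exits (kill -0 only probes for process existence; it delivers no signal):

    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # bounded wait for perf to die, mirroring delete_subsystem.sh lines 34-38
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo 'perf did not exit in time' >&2; break; }
        sleep 0.5
    done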
00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 [2024-11-20 18:08:31.767529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc8fc00cfe0 is same with the state(6) to be set 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 [2024-11-20 18:08:31.767600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc8fc00d780 is same with the state(6) to be set 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Write completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 Read completed with error (sct=0, sc=8) 00:41:32.025 [2024-11-20 18:08:31.769297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd1820 is same with the state(6) to be set 00:41:32.025 Initializing NVMe Controllers 00:41:32.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:32.025 Controller IO queue size 128, less than required. 00:41:32.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
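The "Controller IO queue size 128, less than required" note is spdk_nvme_perf pointing out that the requested queue depth does not fit within the controller's I/O queues, so surplus requests queue inside the NVMe driver rather than on the wire. That is harmless for this test, but if driver-side queuing is unwanted, the fix is the one the message suggests; a hypothetical re-run with the depth halved:

    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 64 -w randrw -M 70 -o 512 -P 4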
00:41:32.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:32.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:32.025 Initialization complete. Launching workers. 00:41:32.025 ======================================================== 00:41:32.025 Latency(us) 00:41:32.025 Device Information : IOPS MiB/s Average min max 00:41:32.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.47 0.09 885130.29 394.44 1043231.47 00:41:32.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.59 0.08 928296.02 329.76 1011467.70 00:41:32.025 ======================================================== 00:41:32.025 Total : 330.06 0.16 905477.99 329.76 1043231.47 00:41:32.025 00:41:32.025 [2024-11-20 18:08:31.769649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd4a80 (9): Bad file descriptor 00:41:32.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:41:32.025 18:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.025 18:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:41:32.025 18:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2966046 00:41:32.025 18:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:41:32.684 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:41:32.684 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2966046 00:41:32.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2966046) - No such process 00:41:32.684 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2966046 00:41:32.684 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:41:32.684 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2966046 00:41:32.684 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:41:32.684 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2966046 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:32.685 [2024-11-20 18:08:32.302953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2966744 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2966744 00:41:32.685 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:32.685 [2024-11-20 18:08:32.385704] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
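With the NOT-wait check done (waiting on the reaped PID must fail, proving the perf process is really gone), the script rebuilds everything it deleted and repeats the run with a 3-second perf pass. The recreation boils down to three RPCs, sketched with scripts/rpc.py as above:

    # recreate the subsystem (-a: allow any host, -s: serial, -m 10: up to ten namespaces)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0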
00:41:32.945 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:32.945 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2966744 00:41:32.945 18:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:33.516 18:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:33.516 18:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2966744 00:41:33.516 18:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:34.085 18:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:34.086 18:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2966744 00:41:34.086 18:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:34.656 18:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:34.656 18:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2966744 00:41:34.656 18:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:35.225 18:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:35.225 18:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2966744 00:41:35.225 18:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:35.485 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:35.485 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2966744 00:41:35.485 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:41:35.744 Initializing NVMe Controllers 00:41:35.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:35.744 Controller IO queue size 128, less than required. 00:41:35.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:35.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:35.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:35.744 Initialization complete. Launching workers. 
00:41:35.744 ======================================================== 00:41:35.744 Latency(us) 00:41:35.744 Device Information : IOPS MiB/s Average min max 00:41:35.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002071.00 1000267.99 1005394.62 00:41:35.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004419.72 1000297.61 1043156.64 00:41:35.744 ======================================================== 00:41:35.744 Total : 256.00 0.12 1003245.36 1000267.99 1043156.64 00:41:35.744 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2966744 00:41:36.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2966744) - No such process 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2966744 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:36.005 rmmod nvme_tcp 00:41:36.005 rmmod nvme_fabrics 00:41:36.005 rmmod nvme_keyring 00:41:36.005 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:36.266 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:41:36.266 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:41:36.266 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 2966022 ']' 00:41:36.266 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 2966022 00:41:36.266 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2966022 ']' 00:41:36.266 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2966022 00:41:36.266 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:41:36.267 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:36.267 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2966022 00:41:36.267 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:36.267 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:36.267 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2966022' 00:41:36.267 killing process with pid 2966022 00:41:36.267 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2966022 00:41:36.267 18:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2966022 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:36.267 18:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:38.810 00:41:38.810 real 0m17.795s 00:41:38.810 user 0m26.813s 00:41:38.810 sys 0m7.393s 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:41:38.810 ************************************ 00:41:38.810 END TEST nvmf_delete_subsystem 00:41:38.810 ************************************ 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:38.810 ************************************ 00:41:38.810 START TEST nvmf_host_management 00:41:38.810 ************************************ 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:41:38.810 * Looking for test storage... 00:41:38.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:41:38.810 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:38.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.811 --rc genhtml_branch_coverage=1 00:41:38.811 --rc genhtml_function_coverage=1 00:41:38.811 --rc genhtml_legend=1 00:41:38.811 --rc geninfo_all_blocks=1 00:41:38.811 --rc geninfo_unexecuted_blocks=1 00:41:38.811 00:41:38.811 ' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:38.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.811 --rc genhtml_branch_coverage=1 00:41:38.811 --rc genhtml_function_coverage=1 00:41:38.811 --rc genhtml_legend=1 00:41:38.811 --rc geninfo_all_blocks=1 00:41:38.811 --rc geninfo_unexecuted_blocks=1 00:41:38.811 00:41:38.811 ' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:38.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.811 --rc genhtml_branch_coverage=1 00:41:38.811 --rc genhtml_function_coverage=1 00:41:38.811 --rc genhtml_legend=1 00:41:38.811 --rc geninfo_all_blocks=1 00:41:38.811 --rc geninfo_unexecuted_blocks=1 00:41:38.811 00:41:38.811 ' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:38.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.811 --rc genhtml_branch_coverage=1 00:41:38.811 --rc genhtml_function_coverage=1 00:41:38.811 --rc genhtml_legend=1 
00:41:38.811 --rc geninfo_all_blocks=1 00:41:38.811 --rc geninfo_unexecuted_blocks=1 00:41:38.811 00:41:38.811 ' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:38.811 18:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:38.811 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:38.812 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:38.812 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.812 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:38.812 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.812 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:38.812 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:38.812 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:41:38.812 18:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:46.949 18:08:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:46.949 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- 
# pci_devs=("${e810[@]}") 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:46.950 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:46.950 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.950 
18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:46.950 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:46.950 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:46.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:46.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:41:46.950 00:41:46.950 --- 10.0.0.2 ping statistics --- 00:41:46.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.950 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:46.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:46.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:41:46.950 00:41:46.950 --- 10.0.0.1 ping statistics --- 00:41:46.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:46.950 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:46.950 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=2971665 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 2971665 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2971665 ']' 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:46.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:46.951 18:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.951 [2024-11-20 18:08:45.865062] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:46.951 [2024-11-20 18:08:45.865931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:41:46.951 [2024-11-20 18:08:45.865969] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:46.951 [2024-11-20 18:08:45.945430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:46.951 [2024-11-20 18:08:45.994307] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:46.951 [2024-11-20 18:08:45.994365] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:46.951 [2024-11-20 18:08:45.994374] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:46.951 [2024-11-20 18:08:45.994381] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:46.951 [2024-11-20 18:08:45.994387] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:46.951 [2024-11-20 18:08:45.994551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:41:46.951 [2024-11-20 18:08:45.994703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:41:46.951 [2024-11-20 18:08:45.994862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.951 [2024-11-20 18:08:45.994863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:41:46.951 [2024-11-20 18:08:46.070267] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:46.951 [2024-11-20 18:08:46.071198] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:46.951 [2024-11-20 18:08:46.071501] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:46.951 [2024-11-20 18:08:46.071723] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:46.951 [2024-11-20 18:08:46.071837] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
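For reference, the nvmf_tcp_init sequence traced above collapses into a dozen commands: flush both NIC ports, hide the target port inside a network namespace, address each side, open the NVMe/TCP port, and start the target inside the namespace. A hand-runnable sketch of this run's fixture (interface names cvl_0_0/cvl_0_1, the 10.0.0.x/24 addresses, port 4420 and the -m 0x1E core mask are all taken from the trace; run as root and substitute your own ports):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                             # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E

Splitting the two ports of one E810 across separate network stacks is what makes this a real hardware test on a single host: the sub-millisecond pings above actually crossed the NIC rather than short-circuiting over the kernel loopback.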
00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.951 [2024-11-20 18:08:46.719836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.951 Malloc0 00:41:46.951 [2024-11-20 18:08:46.804104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2971730 00:41:46.951 18:08:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2971730 /var/tmp/bdevperf.sock 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2971730 ']' 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:41:46.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:46.951 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:41:47.212 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:47.212 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:47.212 { 00:41:47.212 "params": { 00:41:47.212 "name": "Nvme$subsystem", 00:41:47.212 "trtype": "$TEST_TRANSPORT", 00:41:47.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:47.212 "adrfam": "ipv4", 00:41:47.212 "trsvcid": "$NVMF_PORT", 00:41:47.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:47.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:47.212 "hdgst": ${hdgst:-false}, 00:41:47.212 "ddgst": ${ddgst:-false} 00:41:47.212 }, 00:41:47.212 "method": "bdev_nvme_attach_controller" 00:41:47.212 } 00:41:47.212 EOF 00:41:47.212 )") 00:41:47.212 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:41:47.212 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
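The gen_nvmf_target_json trace here (it continues just below with the IFS=, join and the fully expanded printf) shows how the harness feeds bdevperf a configuration without a config file: an unquoted here-doc template, expanded once per subsystem, handed to bdevperf on /dev/fd/63 via process substitution. A condensed sketch — the per-controller fragment is verbatim from the trace, but only one subsystem's output is visible in this excerpt, so take the treatment of multiple fragments as an assumption:

# defaults as established by nvmf_tcp_init earlier in this run
: "${TEST_TRANSPORT:=tcp}" "${NVMF_FIRST_TARGET_IP:=10.0.0.2}" "${NVMF_PORT:=4420}"

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do   # with no arguments, a single subsystem "1"
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .   # join fragments on ',' and pretty-print
}

# usage as in this test (one subsystem, so the output is a single valid object;
# the real helper wraps multiple fragments in an enclosing config array):
#   bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10

The unquoted EOF is the whole trick: $subsystem, $TEST_TRANSPORT and the ${hdgst:-false}/${ddgst:-false} defaults expand while the template is read, so one template yields any number of bdev_nvme_attach_controller stanzas.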
00:41:47.212 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:41:47.212 18:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:47.212 "params": { 00:41:47.212 "name": "Nvme0", 00:41:47.212 "trtype": "tcp", 00:41:47.212 "traddr": "10.0.0.2", 00:41:47.212 "adrfam": "ipv4", 00:41:47.212 "trsvcid": "4420", 00:41:47.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:47.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:47.212 "hdgst": false, 00:41:47.212 "ddgst": false 00:41:47.212 }, 00:41:47.212 "method": "bdev_nvme_attach_controller" 00:41:47.212 }' 00:41:47.212 [2024-11-20 18:08:46.907595] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:41:47.212 [2024-11-20 18:08:46.907650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971730 ] 00:41:47.212 [2024-11-20 18:08:46.983102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:47.212 [2024-11-20 18:08:47.015095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.472 Running I/O for 10 seconds... 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=687 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 687 -ge 100 ']' 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.045 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:48.045 [2024-11-20 18:08:47.776063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:48.045 [2024-11-20 18:08:47.776114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.045 [2024-11-20 18:08:47.776125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:48.045 [2024-11-20 18:08:47.776133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.045 [2024-11-20 18:08:47.776142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:48.045 [2024-11-20 18:08:47.776150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.045 [2024-11-20 18:08:47.776162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:48.045 [2024-11-20 18:08:47.776170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.045 [2024-11-20 18:08:47.776178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c26b0 is same with the state(6) to be set 00:41:48.045 [2024-11-20 18:08:47.777265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f68da0 is same with the state(6) to be set 00:41:48.045 [2024-11-20 18:08:47.777305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1f68da0 is same with the state(6) to be set 00:41:48.045 [... elided for readability: the same tcp.c:1773 recv-state *ERROR* for tqpair=0x1f68da0 repeats another ~28 times (18:08:47.777320 through .777524), then nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion pairs report every still-queued WRITE/READ (sqid:1, lba 98944 through 106240, len:128) completed with ABORTED - SQ DELETION (00/08) as the submission queue is deleted, 18:08:47.777767 onward; the tail of that run resumes below ...] 00:41:48.047 [2024-11-20 18:08:47.778768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:41:48.047 [2024-11-20 18:08:47.778777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:48.047 [2024-11-20 18:08:47.778785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.047 [2024-11-20 18:08:47.778794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:48.047 [2024-11-20 18:08:47.778802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.047 [2024-11-20 18:08:47.778811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:48.047 [2024-11-20 18:08:47.778818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.047 [2024-11-20 18:08:47.778828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:48.047 [2024-11-20 18:08:47.778835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.047 [2024-11-20 18:08:47.778845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:48.047 [2024-11-20 18:08:47.778852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.047 [2024-11-20 18:08:47.778861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:48.047 [2024-11-20 18:08:47.778868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.047 [2024-11-20 18:08:47.778877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:48.047 [2024-11-20 18:08:47.778884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.047 [2024-11-20 18:08:47.778948] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19be960 was disconnected and freed. reset controller. 
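Two things are worth separating in the storm above. First, the test only pulled the trigger after proving I/O was flowing: the waitforio helper (host_management.sh@45-64, traced before the storm) polled bdevperf's iostat until num_read_ops crossed 100 — it read 687 on the first try. Second, everything after rpc_cmd nvmf_subsystem_remove_host is the target enforcing the change: the connection is dropped and each queued I/O completes as ABORTED - SQ DELETION. A reconstruction of the polling helper from the traced commands (rpc_cmd is the harness's RPC wrapper; only one, already-satisfied iteration is visible, so the inter-poll pacing is an assumption):

waitforio() {
    # usage: waitforio /var/tmp/bdevperf.sock Nvme0n1
    local sock=$1 bdev=$2
    local i ret=1 read_io_count
    [ -n "$sock" ] || return 1
    [ -n "$bdev" ] || return 1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then   # 687 -ge 100 here, so one pass sufficed
            ret=0
            break
        fi
        sleep 0.25   # assumed back-off between polls; not observable in this trace
    done
    return $ret
}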
00:41:48.047 [2024-11-20 18:08:47.780172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:48.047 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.047 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:48.047 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:48.047 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:48.047 task offset: 102144 on job bdev=Nvme0n1 fails 00:41:48.047 00:41:48.047 Latency(us) 00:41:48.047 [2024-11-20T17:08:47.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:48.047 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:48.047 Job: Nvme0n1 ended in about 0.47 seconds with error 00:41:48.047 Verification LBA range: start 0x0 length 0x400 00:41:48.047 Nvme0n1 : 0.47 1657.34 103.58 137.22 0.00 34649.04 3276.80 34297.17 00:41:48.047 [2024-11-20T17:08:47.963Z] =================================================================================================================== 00:41:48.047 [2024-11-20T17:08:47.963Z] Total : 1657.34 103.58 137.22 0.00 34649.04 3276.80 34297.17 00:41:48.047 [2024-11-20 18:08:47.782224] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:48.047 [2024-11-20 18:08:47.782250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c26b0 (9): Bad file descriptor 00:41:48.047 [2024-11-20 18:08:47.783604] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:41:48.047 [2024-11-20 18:08:47.783723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:41:48.047 [2024-11-20 18:08:47.783744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:48.047 [2024-11-20 18:08:47.783757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:41:48.047 [2024-11-20 18:08:47.783765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:41:48.047 [2024-11-20 18:08:47.783772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:48.047 [2024-11-20 18:08:47.783779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19c26b0 00:41:48.047 [2024-11-20 18:08:47.783799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c26b0 (9): Bad file descriptor 00:41:48.047 [2024-11-20 18:08:47.783812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:48.047 [2024-11-20 18:08:47.783819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:48.047 [2024-11-20 18:08:47.783829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:41:48.047 [2024-11-20 18:08:47.783841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:48.047 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:48.047 18:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2971730 00:41:48.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2971730) - No such process 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:48.989 { 00:41:48.989 "params": { 00:41:48.989 "name": "Nvme$subsystem", 00:41:48.989 "trtype": "$TEST_TRANSPORT", 00:41:48.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:48.989 "adrfam": "ipv4", 00:41:48.989 "trsvcid": "$NVMF_PORT", 00:41:48.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:48.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:48.989 "hdgst": ${hdgst:-false}, 00:41:48.989 "ddgst": ${ddgst:-false} 00:41:48.989 }, 00:41:48.989 "method": "bdev_nvme_attach_controller" 00:41:48.989 } 00:41:48.989 EOF 00:41:48.989 )") 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:41:48.989 18:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:48.989 "params": { 00:41:48.989 "name": "Nvme0", 00:41:48.989 "trtype": "tcp", 00:41:48.989 "traddr": "10.0.0.2", 00:41:48.989 "adrfam": "ipv4", 00:41:48.989 "trsvcid": "4420", 00:41:48.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:48.989 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:48.989 "hdgst": false, 00:41:48.989 "ddgst": false 00:41:48.989 }, 00:41:48.989 "method": "bdev_nvme_attach_controller" 00:41:48.989 }' 00:41:48.989 [2024-11-20 18:08:48.854064] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
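Stripped of the xtrace noise, the scenario that produced the last two screens is small. A skeleton of the test body (NQNs from this run; rpc_cmd is the harness's RPC wrapper and $perfpid the saved bdevperf pid):

# revoke the host: the target aborts its queued I/O (the SQ DELETION storm
# above) and refuses the reconnect with "Subsystem ... does not allow host ..."
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore access, then give things a moment to settle
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
# bdevperf already stopped itself when the controller reset failed
# (spdk_app_stop'd on non-zero), so the kill expects to find nothing
kill -9 "$perfpid" || true

The kill -9 that follows below is exactly this last step: the "No such process" is expected, which is why the script chases it with true.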
00:41:48.989 [2024-11-20 18:08:48.854127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2972111 ] 00:41:49.249 [2024-11-20 18:08:48.930923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.249 [2024-11-20 18:08:48.962330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:49.249 Running I/O for 1 seconds... 00:41:50.632 2114.00 IOPS, 132.12 MiB/s 00:41:50.632 Latency(us) 00:41:50.632 [2024-11-20T17:08:50.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:50.632 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:50.632 Verification LBA range: start 0x0 length 0x400 00:41:50.632 Nvme0n1 : 1.01 2159.60 134.98 0.00 0.00 28999.30 2321.07 30801.92 00:41:50.632 [2024-11-20T17:08:50.548Z] =================================================================================================================== 00:41:50.632 [2024-11-20T17:08:50.548Z] Total : 2159.60 134.98 0.00 0.00 28999.30 2321.07 30801.92 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:50.632 rmmod nvme_tcp 00:41:50.632 rmmod nvme_fabrics 00:41:50.632 rmmod nvme_keyring 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 2971665 ']' 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 2971665 00:41:50.632 18:08:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2971665 ']' 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2971665 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2971665 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2971665' 00:41:50.632 killing process with pid 2971665 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2971665 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2971665 00:41:50.632 [2024-11-20 18:08:50.521860] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:41:50.632 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:50.893 18:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:41:52.801 00:41:52.801 real 0m14.385s 00:41:52.801 user 
0m18.755s 00:41:52.801 sys 0m7.291s 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:52.801 ************************************ 00:41:52.801 END TEST nvmf_host_management 00:41:52.801 ************************************ 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:52.801 ************************************ 00:41:52.801 START TEST nvmf_lvol 00:41:52.801 ************************************ 00:41:52.801 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:53.063 * Looking for test storage... 00:41:53.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
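The lt 1.15 2 check traced above, and continuing record by record below, is a plain field-wise version comparison: split both strings on '.' and '-', then walk the fields left to right until one side wins. A condensed sketch of the same logic (names follow scripts/common.sh; non-numeric fields and the remaining comparison operators are simplified away):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
  local -a ver1 ver2
  local v op=$2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$3"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    # A missing field compares as 0, so "1.15" behaves like "1.15.0".
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
  done
  [[ $op == *'='* ]]   # all fields equal: only <=, >=, == hold
}
lt 1.15 2 && echo "1.15 < 2"   # matches the trace: ver1[0]=1 is less than ver2[0]=2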
00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:53.063 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:53.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.064 --rc genhtml_branch_coverage=1 00:41:53.064 --rc genhtml_function_coverage=1 00:41:53.064 --rc genhtml_legend=1 00:41:53.064 --rc geninfo_all_blocks=1 00:41:53.064 --rc geninfo_unexecuted_blocks=1 00:41:53.064 00:41:53.064 ' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:53.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.064 --rc genhtml_branch_coverage=1 00:41:53.064 --rc genhtml_function_coverage=1 00:41:53.064 --rc genhtml_legend=1 00:41:53.064 --rc geninfo_all_blocks=1 00:41:53.064 --rc geninfo_unexecuted_blocks=1 00:41:53.064 00:41:53.064 ' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:53.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.064 --rc genhtml_branch_coverage=1 00:41:53.064 --rc genhtml_function_coverage=1 00:41:53.064 --rc genhtml_legend=1 00:41:53.064 --rc geninfo_all_blocks=1 00:41:53.064 --rc geninfo_unexecuted_blocks=1 00:41:53.064 00:41:53.064 ' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:53.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.064 --rc genhtml_branch_coverage=1 00:41:53.064 --rc genhtml_function_coverage=1 
00:41:53.064 --rc genhtml_legend=1 00:41:53.064 --rc geninfo_all_blocks=1 00:41:53.064 --rc geninfo_unexecuted_blocks=1 00:41:53.064 00:41:53.064 ' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:53.064 18:08:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.064 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:53.065 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:53.065 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:41:53.065 18:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:01.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:01.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:01.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
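The NIC discovery above is sysfs-driven: each PCI function lists its kernel net devices under /sys/bus/pci/devices/<bdf>/net/, so mapping a BDF to an interface name is a glob plus a basename strip. A minimal standalone version of the lookup, using the first BDF found above:

pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # drop the directory, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"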
00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:01.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:01.204 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:01.205 18:08:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:01.205 18:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:01.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:01.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:42:01.205 00:42:01.205 --- 10.0.0.2 ping statistics --- 00:42:01.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.205 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:01.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:01.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:42:01.205 00:42:01.205 --- 10.0.0.1 ping statistics --- 00:42:01.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.205 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:01.205 18:09:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=2976590 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 2976590 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2976590 ']' 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:01.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:01.205 18:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:42:01.205 [2024-11-20 18:09:00.202789] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:01.205 [2024-11-20 18:09:00.203758] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:42:01.205 [2024-11-20 18:09:00.203796] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:01.205 [2024-11-20 18:09:00.287078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:01.205 [2024-11-20 18:09:00.319633] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:01.205 [2024-11-20 18:09:00.319669] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:01.205 [2024-11-20 18:09:00.319677] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:01.205 [2024-11-20 18:09:00.319684] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:01.205 [2024-11-20 18:09:00.319690] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:01.205 [2024-11-20 18:09:00.319827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:01.205 [2024-11-20 18:09:00.319971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:01.205 [2024-11-20 18:09:00.319973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:01.205 [2024-11-20 18:09:00.380540] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:01.205 [2024-11-20 18:09:00.381510] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
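From here the target runs entirely inside the cvl_0_0_ns_spdk namespace: nvmfappstart prefixes nvmf_tgt with NVMF_TARGET_NS_CMD and waitforlisten polls the RPC socket until the app answers. A sketch of that launch-and-wait pattern (SPDK is the workspace checkout from the trace; the rpc_get_methods loop is a simplification of waitforlisten, not its exact body):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!
# Block until the RPC socket accepts requests; bail out if the target dies first.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
  sleep 0.1
done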
00:42:01.205 [2024-11-20 18:09:00.382624] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:01.205 [2024-11-20 18:09:00.382666] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:01.205 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:01.205 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:42:01.205 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:01.205 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:01.205 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:42:01.205 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:01.205 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:01.466 [2024-11-20 18:09:01.220887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:01.466 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:01.726 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:42:01.726 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:01.987 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:42:01.988 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:42:01.988 18:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:42:02.248 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e8adab51-28e8-42ae-9f6f-c532050f9e31 00:42:02.248 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e8adab51-28e8-42ae-9f6f-c532050f9e31 lvol 20 00:42:02.508 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=00d0d56b-ec42-4f95-9114-cd7ac7047fdf 00:42:02.508 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:02.508 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00d0d56b-ec42-4f95-9114-cd7ac7047fdf 00:42:02.768 18:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:03.028 [2024-11-20 18:09:02.700688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:03.029 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:03.029 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2977170 00:42:03.029 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:42:03.029 18:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:42:04.414 18:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 00d0d56b-ec42-4f95-9114-cd7ac7047fdf MY_SNAPSHOT 00:42:04.414 18:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bb459ad2-ebaa-43e9-b8dc-4aa862a619c1 00:42:04.414 18:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 00d0d56b-ec42-4f95-9114-cd7ac7047fdf 30 00:42:04.675 18:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bb459ad2-ebaa-43e9-b8dc-4aa862a619c1 MY_CLONE 00:42:04.936 18:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3799f80c-c9dd-4d57-8cf1-4f2280996527 00:42:04.936 18:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3799f80c-c9dd-4d57-8cf1-4f2280996527 00:42:05.197 18:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2977170 00:42:13.350 Initializing NVMe Controllers 00:42:13.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:42:13.350 Controller IO queue size 128, less than required. 00:42:13.350 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:13.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:42:13.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:42:13.350 Initialization complete. Launching workers. 
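While spdk_nvme_perf drives random writes from cores 3 and 4, the script walks the lvol lifecycle just traced: snapshot the 20 MiB volume, grow it to 30 MiB, thin-clone the snapshot, then inflate the clone so it owns its own clusters. Condensed, with the UUIDs the RPCs returned above:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvol=00d0d56b-ec42-4f95-9114-cd7ac7047fdf
snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # -> bb459ad2-ebaa-43e9-b8dc-4aa862a619c1
$rpc_py bdev_lvol_resize "$lvol" 30                          # grow the live lvol; size is in MiB
clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)        # -> 3799f80c-c9dd-4d57-8cf1-4f2280996527
$rpc_py bdev_lvol_inflate "$clone"                           # copy clusters so the clone stands alone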
00:42:13.350 ======================================================== 00:42:13.350 Latency(us) 00:42:13.350 Device Information : IOPS MiB/s Average min max 00:42:13.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15076.80 58.89 8490.56 729.45 51095.88 00:42:13.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15873.90 62.01 8064.95 2702.69 52749.79 00:42:13.350 ======================================================== 00:42:13.350 Total : 30950.70 120.90 8272.27 729.45 52749.79 00:42:13.350 00:42:13.350 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:13.611 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 00d0d56b-ec42-4f95-9114-cd7ac7047fdf 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8adab51-28e8-42ae-9f6f-c532050f9e31 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:13.872 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:13.872 rmmod nvme_tcp 00:42:13.872 rmmod nvme_fabrics 00:42:14.132 rmmod nvme_keyring 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 2976590 ']' 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 2976590 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2976590 ']' 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2976590 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2976590 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2976590' 00:42:14.132 killing process with pid 2976590 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2976590 00:42:14.132 18:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2976590 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:14.132 18:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:16.677 00:42:16.677 real 0m23.430s 00:42:16.677 user 0m55.342s 00:42:16.677 sys 0m10.441s 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:42:16.677 ************************************ 00:42:16.677 END TEST nvmf_lvol 00:42:16.677 ************************************ 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:16.677 ************************************ 00:42:16.677 START TEST nvmf_lvs_grow 00:42:16.677 
************************************ 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:42:16.677 * Looking for test storage... 00:42:16.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:42:16.677 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.678 --rc genhtml_branch_coverage=1 00:42:16.678 --rc genhtml_function_coverage=1 00:42:16.678 --rc genhtml_legend=1 00:42:16.678 --rc geninfo_all_blocks=1 00:42:16.678 --rc geninfo_unexecuted_blocks=1 00:42:16.678 00:42:16.678 ' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.678 --rc genhtml_branch_coverage=1 00:42:16.678 --rc genhtml_function_coverage=1 00:42:16.678 --rc genhtml_legend=1 00:42:16.678 --rc geninfo_all_blocks=1 00:42:16.678 --rc geninfo_unexecuted_blocks=1 00:42:16.678 00:42:16.678 ' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.678 --rc genhtml_branch_coverage=1 00:42:16.678 --rc genhtml_function_coverage=1 00:42:16.678 --rc genhtml_legend=1 00:42:16.678 --rc geninfo_all_blocks=1 00:42:16.678 --rc geninfo_unexecuted_blocks=1 00:42:16.678 00:42:16.678 ' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.678 --rc genhtml_branch_coverage=1 00:42:16.678 --rc genhtml_function_coverage=1 00:42:16.678 --rc genhtml_legend=1 00:42:16.678 --rc geninfo_all_blocks=1 00:42:16.678 --rc geninfo_unexecuted_blocks=1 00:42:16.678 00:42:16.678 ' 00:42:16.678 18:09:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
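The xtrace around this point walks nvmf/common.sh as it assembles the target's argument vector. Condensed into a standalone sketch — the guard variable here is an illustrative stand-in for the script's hard-coded '[' 1 -eq 1 ']' test, and the binary path is shortened:

  # Hedged sketch of the NVMF_APP assembly traced above and just below;
  # not the literal nvmf/common.sh logic.
  NVMF_APP=(./build/bin/nvmf_tgt)                 # path shortened for the sketch
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shared-memory id + trace mask
  NVMF_APP+=("${NO_HUGE[@]}")                     # empty unless hugepages are disabled
  if (( TEST_INTERRUPT_MODE )); then              # illustrative guard; hard-coded '1' in this run
      NVMF_APP+=(--interrupt-mode)                # the flag this whole suite exercises
  fi
  # Launched later inside the target namespace:
  #   ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x1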
00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:42:16.678 18:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:24.828 18:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 
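The array setup above buckets candidate NICs by PCI vendor/device id before the per-device loop below matches 0x159b (ice). A minimal sketch of that classification, assuming lspci parsing in place of the script's pci_bus_cache lookup and covering only the ids this run actually hits:

  intel=0x8086
  declare -a e810=() x722=()
  while read -r addr vendev; do
      case "$vendev" in
          8086:1592|8086:159b) e810+=("$addr") ;;   # E810 family, 'ice' driver
          8086:37d2)           x722+=("$addr") ;;   # X722 family
      esac
  done < <(lspci -Dn | awk '{print $1, $3}')        # assumption: lspci stands in for pci_bus_cache
  pci_devs=("${e810[@]}")                           # e810 wins when present, as in this log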
00:42:24.828 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:24.828 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:24.828 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:24.828 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:24.828 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:24.829 18:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:24.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:24.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:42:24.829 00:42:24.829 --- 10.0.0.2 ping statistics --- 00:42:24.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:24.829 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:24.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:24.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:42:24.829 00:42:24.829 --- 10.0.0.1 ping statistics --- 00:42:24.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:24.829 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=2983740 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 2983740 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2983740 ']' 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:24.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:24.829 18:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:24.829 [2024-11-20 18:09:23.718420] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
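The ping statistics above close out nvmf_tcp_init: one port of the E810 pair was moved into a namespace to act as the target, the other stayed in the root namespace as the initiator, and connectivity was verified in both directions. Condensed from the commands traced above (interface and address names exactly as in this log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF in the real run
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

nvmf_tgt is then started inside that namespace (the nvmfappstart trace above), so the listener it later opens on 10.0.0.2:4420 sits behind the namespaced port.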
00:42:24.829 [2024-11-20 18:09:23.719435] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:42:24.829 [2024-11-20 18:09:23.719473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:24.829 [2024-11-20 18:09:23.799364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.829 [2024-11-20 18:09:23.830361] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:24.829 [2024-11-20 18:09:23.830398] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:24.829 [2024-11-20 18:09:23.830408] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:24.829 [2024-11-20 18:09:23.830416] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:24.829 [2024-11-20 18:09:23.830421] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:24.829 [2024-11-20 18:09:23.830441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:24.829 [2024-11-20 18:09:23.878267] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:24.829 [2024-11-20 18:09:23.878520] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:24.829 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:24.829 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:42:24.829 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:24.829 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:24.829 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:24.829 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:24.829 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:24.829 [2024-11-20 18:09:24.723265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:25.089 ************************************ 00:42:25.089 START TEST lvs_grow_clean 00:42:25.089 ************************************ 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:42:25.089 18:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:42:25.348 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=65695431-2401-4e70-b56f-1c83f570c16f 00:42:25.348 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:25.348 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:42:25.607 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:42:25.607 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:42:25.607 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65695431-2401-4e70-b56f-1c83f570c16f lvol 150 00:42:25.607 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3ec4e8e0-4424-46c3-a97d-76c742c96d22 00:42:25.607 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:25.607 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:42:25.866 [2024-11-20 18:09:25.642871] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:42:25.866 [2024-11-20 18:09:25.643013] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:42:25.866 true 00:42:25.866 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:25.866 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:42:26.125 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:42:26.125 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:26.125 18:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ec4e8e0-4424-46c3-a97d-76c742c96d22 00:42:26.429 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:26.429 [2024-11-20 18:09:26.283436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:26.429 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2984279 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2984279 /var/tmp/bdevperf.sock 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2984279 ']' 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:26.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:26.761 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:42:26.761 [2024-11-20 18:09:26.529883] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:42:26.761 [2024-11-20 18:09:26.529939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984279 ] 00:42:26.761 [2024-11-20 18:09:26.592731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:26.761 [2024-11-20 18:09:26.631764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:27.021 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:27.021 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:42:27.021 18:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:42:27.281 Nvme0n1 00:42:27.281 18:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:42:27.541 [ 00:42:27.541 { 00:42:27.541 "name": "Nvme0n1", 00:42:27.541 "aliases": [ 00:42:27.541 "3ec4e8e0-4424-46c3-a97d-76c742c96d22" 00:42:27.541 ], 00:42:27.541 "product_name": "NVMe disk", 00:42:27.541 "block_size": 4096, 00:42:27.541 "num_blocks": 38912, 00:42:27.541 "uuid": "3ec4e8e0-4424-46c3-a97d-76c742c96d22", 00:42:27.541 "numa_id": 0, 00:42:27.541 "assigned_rate_limits": { 00:42:27.541 "rw_ios_per_sec": 0, 00:42:27.541 "rw_mbytes_per_sec": 0, 00:42:27.541 "r_mbytes_per_sec": 0, 00:42:27.541 "w_mbytes_per_sec": 0 00:42:27.541 }, 00:42:27.541 "claimed": false, 00:42:27.541 "zoned": false, 00:42:27.541 "supported_io_types": { 00:42:27.541 "read": true, 00:42:27.541 "write": true, 00:42:27.541 "unmap": true, 00:42:27.541 "flush": true, 00:42:27.541 "reset": true, 00:42:27.541 "nvme_admin": true, 00:42:27.541 "nvme_io": true, 00:42:27.541 "nvme_io_md": false, 00:42:27.541 "write_zeroes": true, 00:42:27.541 "zcopy": false, 00:42:27.541 "get_zone_info": false, 00:42:27.541 "zone_management": false, 00:42:27.541 "zone_append": false, 00:42:27.541 "compare": true, 00:42:27.541 "compare_and_write": true, 00:42:27.541 "abort": true, 00:42:27.541 "seek_hole": false, 00:42:27.541 "seek_data": false, 00:42:27.541 "copy": true, 
00:42:27.541 "nvme_iov_md": false 00:42:27.541 }, 00:42:27.541 "memory_domains": [ 00:42:27.541 { 00:42:27.541 "dma_device_id": "system", 00:42:27.541 "dma_device_type": 1 00:42:27.541 } 00:42:27.541 ], 00:42:27.541 "driver_specific": { 00:42:27.541 "nvme": [ 00:42:27.541 { 00:42:27.541 "trid": { 00:42:27.541 "trtype": "TCP", 00:42:27.541 "adrfam": "IPv4", 00:42:27.541 "traddr": "10.0.0.2", 00:42:27.541 "trsvcid": "4420", 00:42:27.541 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:42:27.541 }, 00:42:27.541 "ctrlr_data": { 00:42:27.541 "cntlid": 1, 00:42:27.541 "vendor_id": "0x8086", 00:42:27.541 "model_number": "SPDK bdev Controller", 00:42:27.541 "serial_number": "SPDK0", 00:42:27.541 "firmware_revision": "24.09.1", 00:42:27.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:27.541 "oacs": { 00:42:27.541 "security": 0, 00:42:27.541 "format": 0, 00:42:27.541 "firmware": 0, 00:42:27.541 "ns_manage": 0 00:42:27.541 }, 00:42:27.541 "multi_ctrlr": true, 00:42:27.541 "ana_reporting": false 00:42:27.541 }, 00:42:27.541 "vs": { 00:42:27.541 "nvme_version": "1.3" 00:42:27.541 }, 00:42:27.541 "ns_data": { 00:42:27.541 "id": 1, 00:42:27.541 "can_share": true 00:42:27.541 } 00:42:27.541 } 00:42:27.541 ], 00:42:27.541 "mp_policy": "active_passive" 00:42:27.541 } 00:42:27.541 } 00:42:27.541 ] 00:42:27.541 18:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2984419 00:42:27.542 18:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:42:27.542 18:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:27.542 Running I/O for 10 seconds... 
00:42:28.482 Latency(us) 00:42:28.482 [2024-11-20T17:09:28.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:28.482 Nvme0n1 : 1.00 17132.00 66.92 0.00 0.00 0.00 0.00 0.00 00:42:28.482 [2024-11-20T17:09:28.398Z] =================================================================================================================== 00:42:28.482 [2024-11-20T17:09:28.398Z] Total : 17132.00 66.92 0.00 0.00 0.00 0.00 0.00 00:42:28.482 00:42:29.421 18:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:29.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:29.680 Nvme0n1 : 2.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:42:29.680 [2024-11-20T17:09:29.596Z] =================================================================================================================== 00:42:29.680 [2024-11-20T17:09:29.596Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:42:29.680 00:42:29.680 true 00:42:29.680 18:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:42:29.680 18:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:29.940 18:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:42:29.940 18:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:42:29.940 18:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2984419 00:42:30.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:30.508 Nvme0n1 : 3.00 17572.00 68.64 0.00 0.00 0.00 0.00 0.00 00:42:30.508 [2024-11-20T17:09:30.424Z] =================================================================================================================== 00:42:30.508 [2024-11-20T17:09:30.425Z] Total : 17572.00 68.64 0.00 0.00 0.00 0.00 0.00 00:42:30.509 00:42:31.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:31.448 Nvme0n1 : 4.00 17675.25 69.04 0.00 0.00 0.00 0.00 0.00 00:42:31.448 [2024-11-20T17:09:31.364Z] =================================================================================================================== 00:42:31.448 [2024-11-20T17:09:31.364Z] Total : 17675.25 69.04 0.00 0.00 0.00 0.00 0.00 00:42:31.448 00:42:32.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:32.827 Nvme0n1 : 5.00 18748.20 73.24 0.00 0.00 0.00 0.00 0.00 00:42:32.827 [2024-11-20T17:09:32.743Z] =================================================================================================================== 00:42:32.827 [2024-11-20T17:09:32.743Z] Total : 18748.20 73.24 0.00 0.00 0.00 0.00 0.00 00:42:32.827 00:42:33.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:33.766 Nvme0n1 : 6.00 19868.83 77.61 0.00 0.00 0.00 0.00 0.00 00:42:33.766 [2024-11-20T17:09:33.682Z] 
=================================================================================================================== 00:42:33.766 [2024-11-20T17:09:33.682Z] Total : 19868.83 77.61 0.00 0.00 0.00 0.00 0.00 00:42:33.766 00:42:34.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:34.706 Nvme0n1 : 7.00 20669.29 80.74 0.00 0.00 0.00 0.00 0.00 00:42:34.706 [2024-11-20T17:09:34.622Z] =================================================================================================================== 00:42:34.706 [2024-11-20T17:09:34.622Z] Total : 20669.29 80.74 0.00 0.00 0.00 0.00 0.00 00:42:34.706 00:42:35.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:35.646 Nvme0n1 : 8.00 21277.50 83.12 0.00 0.00 0.00 0.00 0.00 00:42:35.646 [2024-11-20T17:09:35.562Z] =================================================================================================================== 00:42:35.646 [2024-11-20T17:09:35.562Z] Total : 21277.50 83.12 0.00 0.00 0.00 0.00 0.00 00:42:35.646 00:42:36.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:36.584 Nvme0n1 : 9.00 21750.78 84.96 0.00 0.00 0.00 0.00 0.00 00:42:36.584 [2024-11-20T17:09:36.500Z] =================================================================================================================== 00:42:36.584 [2024-11-20T17:09:36.500Z] Total : 21750.78 84.96 0.00 0.00 0.00 0.00 0.00 00:42:36.584 00:42:37.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:37.608 Nvme0n1 : 10.00 22129.20 86.44 0.00 0.00 0.00 0.00 0.00 00:42:37.608 [2024-11-20T17:09:37.524Z] =================================================================================================================== 00:42:37.608 [2024-11-20T17:09:37.524Z] Total : 22129.20 86.44 0.00 0.00 0.00 0.00 0.00 00:42:37.608 00:42:37.608 00:42:37.608 Latency(us) 00:42:37.608 [2024-11-20T17:09:37.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:37.608 Nvme0n1 : 10.00 22128.41 86.44 0.00 0.00 5780.88 4205.23 30583.47 00:42:37.608 [2024-11-20T17:09:37.524Z] =================================================================================================================== 00:42:37.608 [2024-11-20T17:09:37.524Z] Total : 22128.41 86.44 0.00 0.00 5780.88 4205.23 30583.47 00:42:37.608 { 00:42:37.608 "results": [ 00:42:37.608 { 00:42:37.608 "job": "Nvme0n1", 00:42:37.608 "core_mask": "0x2", 00:42:37.608 "workload": "randwrite", 00:42:37.608 "status": "finished", 00:42:37.608 "queue_depth": 128, 00:42:37.608 "io_size": 4096, 00:42:37.608 "runtime": 10.003294, 00:42:37.608 "iops": 22128.410901449064, 00:42:37.608 "mibps": 86.4391050837854, 00:42:37.608 "io_failed": 0, 00:42:37.608 "io_timeout": 0, 00:42:37.608 "avg_latency_us": 5780.884074383613, 00:42:37.608 "min_latency_us": 4205.2266666666665, 00:42:37.608 "max_latency_us": 30583.466666666667 00:42:37.608 } 00:42:37.608 ], 00:42:37.608 "core_count": 1 00:42:37.608 } 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2984279 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2984279 ']' 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2984279 
00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2984279 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2984279' 00:42:37.608 killing process with pid 2984279 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2984279 00:42:37.608 Received shutdown signal, test time was about 10.000000 seconds 00:42:37.608 00:42:37.608 Latency(us) 00:42:37.608 [2024-11-20T17:09:37.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.608 [2024-11-20T17:09:37.524Z] =================================================================================================================== 00:42:37.608 [2024-11-20T17:09:37.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:37.608 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2984279 00:42:37.868 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:37.868 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:38.128 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:38.128 18:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:38.388 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:38.388 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:42:38.389 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:38.389 [2024-11-20 18:09:38.270943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 
00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:38.649 request: 00:42:38.649 { 00:42:38.649 "uuid": "65695431-2401-4e70-b56f-1c83f570c16f", 00:42:38.649 "method": "bdev_lvol_get_lvstores", 00:42:38.649 "req_id": 1 00:42:38.649 } 00:42:38.649 Got JSON-RPC error response 00:42:38.649 response: 00:42:38.649 { 00:42:38.649 "code": -19, 00:42:38.649 "message": "No such device" 00:42:38.649 } 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:38.649 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:38.910 aio_bdev 00:42:38.910 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3ec4e8e0-4424-46c3-a97d-76c742c96d22 00:42:38.910 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=3ec4e8e0-4424-46c3-a97d-76c742c96d22 00:42:38.910 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:42:38.910 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:42:38.910 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:42:38.910 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:42:38.910 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:39.170 18:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3ec4e8e0-4424-46c3-a97d-76c742c96d22 -t 2000 00:42:39.170 [ 00:42:39.170 { 00:42:39.170 "name": "3ec4e8e0-4424-46c3-a97d-76c742c96d22", 00:42:39.170 "aliases": [ 00:42:39.170 "lvs/lvol" 00:42:39.170 ], 00:42:39.170 "product_name": "Logical Volume", 00:42:39.170 "block_size": 4096, 00:42:39.170 "num_blocks": 38912, 00:42:39.170 "uuid": "3ec4e8e0-4424-46c3-a97d-76c742c96d22", 00:42:39.170 "assigned_rate_limits": { 00:42:39.170 "rw_ios_per_sec": 0, 00:42:39.170 "rw_mbytes_per_sec": 0, 00:42:39.170 "r_mbytes_per_sec": 0, 00:42:39.170 "w_mbytes_per_sec": 0 00:42:39.170 }, 00:42:39.170 "claimed": false, 00:42:39.170 "zoned": false, 00:42:39.170 "supported_io_types": { 00:42:39.170 "read": true, 00:42:39.170 "write": true, 00:42:39.170 "unmap": true, 00:42:39.170 "flush": false, 00:42:39.170 "reset": true, 00:42:39.170 "nvme_admin": false, 00:42:39.170 "nvme_io": false, 00:42:39.170 "nvme_io_md": false, 00:42:39.170 "write_zeroes": true, 00:42:39.170 "zcopy": false, 00:42:39.170 "get_zone_info": false, 00:42:39.170 "zone_management": false, 00:42:39.170 "zone_append": false, 00:42:39.170 "compare": false, 00:42:39.171 "compare_and_write": false, 00:42:39.171 "abort": false, 00:42:39.171 "seek_hole": true, 00:42:39.171 "seek_data": true, 00:42:39.171 "copy": false, 00:42:39.171 "nvme_iov_md": false 00:42:39.171 }, 00:42:39.171 "driver_specific": { 00:42:39.171 "lvol": { 00:42:39.171 "lvol_store_uuid": "65695431-2401-4e70-b56f-1c83f570c16f", 00:42:39.171 "base_bdev": "aio_bdev", 00:42:39.171 "thin_provision": false, 00:42:39.171 "num_allocated_clusters": 38, 00:42:39.171 "snapshot": false, 00:42:39.171 "clone": false, 00:42:39.171 "esnap_clone": false 00:42:39.171 } 00:42:39.171 } 00:42:39.171 } 00:42:39.171 ] 00:42:39.171 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:42:39.171 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:39.171 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:39.451 18:09:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:39.451 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:39.451 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:39.711 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:39.711 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ec4e8e0-4424-46c3-a97d-76c742c96d22 00:42:39.711 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65695431-2401-4e70-b56f-1c83f570c16f 00:42:39.971 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:40.232 00:42:40.232 real 0m15.184s 00:42:40.232 user 0m14.834s 00:42:40.232 sys 0m1.288s 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:42:40.232 ************************************ 00:42:40.232 END TEST lvs_grow_clean 00:42:40.232 ************************************ 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:40.232 ************************************ 00:42:40.232 START TEST lvs_grow_dirty 00:42:40.232 ************************************ 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:40.232 18:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:40.232 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:40.492 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:42:40.492 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:42:40.753 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=856c1343-dc92-473c-888a-cc983df43d9b 00:42:40.753 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:40.753 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:42:40.753 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:42:40.753 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:42:40.753 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 856c1343-dc92-473c-888a-cc983df43d9b lvol 150 00:42:41.012 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ad9e12bf-fa37-42e9-9873-1236c393ce67 00:42:41.012 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:41.012 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:42:41.012 [2024-11-20 18:09:40.890864] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:42:41.012 [2024-11-20 18:09:40.891004] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:42:41.012 true 00:42:41.012 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:41.012 18:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:42:41.273 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:42:41.273 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:41.532 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ad9e12bf-fa37-42e9-9873-1236c393ce67 00:42:41.532 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:41.792 [2024-11-20 18:09:41.535396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2987099 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2987099 /var/tmp/bdevperf.sock 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2987099 ']' 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:41.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
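Condensed from the records above, the dirty variant's data path is built like this: a 200M file backs an AIO bdev with 4 KiB blocks, an lvstore and a 150M lvol sit on top, then the file is grown to 400M and rescanned so the lvstore can be grown later over NVMe-oF. A sketch assuming rpc.py prints the created UUID/bdev name (the shell variables are added here for readability; all commands, sizes and names come from the trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    AIO=$SPDK/test/nvmf/target/aio_bdev
    RPC=$SPDK/scripts/rpc.py
    rm -f "$AIO" && truncate -s 200M "$AIO"
    $RPC bdev_aio_create "$AIO" aio_bdev 4096
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB logical volume
    truncate -s 400M "$AIO"                            # grow the backing file underneath
    $RPC bdev_aio_rescan aio_bdev                      # block count 51200 -> 102400, per the notice above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that subsystem over its own RPC socket, as traced just below:

    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0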
00:42:41.792 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:42.052 18:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:42.052 [2024-11-20 18:09:41.752574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:42:42.052 [2024-11-20 18:09:41.752634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987099 ] 00:42:42.052 [2024-11-20 18:09:41.828203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.052 [2024-11-20 18:09:41.856703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:42.621 18:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:42.621 18:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:42:42.621 18:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:42:43.191 Nvme0n1 00:42:43.191 18:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:42:43.191 [ 00:42:43.191 { 00:42:43.191 "name": "Nvme0n1", 00:42:43.191 "aliases": [ 00:42:43.191 "ad9e12bf-fa37-42e9-9873-1236c393ce67" 00:42:43.191 ], 00:42:43.191 "product_name": "NVMe disk", 00:42:43.191 "block_size": 4096, 00:42:43.191 "num_blocks": 38912, 00:42:43.191 "uuid": "ad9e12bf-fa37-42e9-9873-1236c393ce67", 00:42:43.191 "numa_id": 0, 00:42:43.191 "assigned_rate_limits": { 00:42:43.191 "rw_ios_per_sec": 0, 00:42:43.191 "rw_mbytes_per_sec": 0, 00:42:43.192 "r_mbytes_per_sec": 0, 00:42:43.192 "w_mbytes_per_sec": 0 00:42:43.192 }, 00:42:43.192 "claimed": false, 00:42:43.192 "zoned": false, 00:42:43.192 "supported_io_types": { 00:42:43.192 "read": true, 00:42:43.192 "write": true, 00:42:43.192 "unmap": true, 00:42:43.192 "flush": true, 00:42:43.192 "reset": true, 00:42:43.192 "nvme_admin": true, 00:42:43.192 "nvme_io": true, 00:42:43.192 "nvme_io_md": false, 00:42:43.192 "write_zeroes": true, 00:42:43.192 "zcopy": false, 00:42:43.192 "get_zone_info": false, 00:42:43.192 "zone_management": false, 00:42:43.192 "zone_append": false, 00:42:43.192 "compare": true, 00:42:43.192 "compare_and_write": true, 00:42:43.192 "abort": true, 00:42:43.192 "seek_hole": false, 00:42:43.192 "seek_data": false, 00:42:43.192 "copy": true, 00:42:43.192 "nvme_iov_md": false 00:42:43.192 }, 00:42:43.192 "memory_domains": [ 00:42:43.192 { 00:42:43.192 "dma_device_id": "system", 00:42:43.192 "dma_device_type": 1 00:42:43.192 } 00:42:43.192 ], 00:42:43.192 "driver_specific": { 00:42:43.192 "nvme": [ 00:42:43.192 { 00:42:43.192 "trid": { 00:42:43.192 "trtype": "TCP", 00:42:43.192 "adrfam": "IPv4", 00:42:43.192 "traddr": "10.0.0.2", 00:42:43.192 "trsvcid": "4420", 00:42:43.192 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:42:43.192 }, 00:42:43.192 
"ctrlr_data": { 00:42:43.192 "cntlid": 1, 00:42:43.192 "vendor_id": "0x8086", 00:42:43.192 "model_number": "SPDK bdev Controller", 00:42:43.192 "serial_number": "SPDK0", 00:42:43.192 "firmware_revision": "24.09.1", 00:42:43.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:43.192 "oacs": { 00:42:43.192 "security": 0, 00:42:43.192 "format": 0, 00:42:43.192 "firmware": 0, 00:42:43.192 "ns_manage": 0 00:42:43.192 }, 00:42:43.192 "multi_ctrlr": true, 00:42:43.192 "ana_reporting": false 00:42:43.192 }, 00:42:43.192 "vs": { 00:42:43.192 "nvme_version": "1.3" 00:42:43.192 }, 00:42:43.192 "ns_data": { 00:42:43.192 "id": 1, 00:42:43.192 "can_share": true 00:42:43.192 } 00:42:43.192 } 00:42:43.192 ], 00:42:43.192 "mp_policy": "active_passive" 00:42:43.192 } 00:42:43.192 } 00:42:43.192 ] 00:42:43.192 18:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2987319 00:42:43.192 18:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:42:43.192 18:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:43.192 Running I/O for 10 seconds... 00:42:44.574 Latency(us) 00:42:44.574 [2024-11-20T17:09:44.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:44.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:44.574 Nvme0n1 : 1.00 17408.00 68.00 0.00 0.00 0.00 0.00 0.00 00:42:44.574 [2024-11-20T17:09:44.490Z] =================================================================================================================== 00:42:44.574 [2024-11-20T17:09:44.490Z] Total : 17408.00 68.00 0.00 0.00 0.00 0.00 0.00 00:42:44.574 00:42:45.144 18:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:45.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:45.404 Nvme0n1 : 2.00 17664.00 69.00 0.00 0.00 0.00 0.00 0.00 00:42:45.404 [2024-11-20T17:09:45.320Z] =================================================================================================================== 00:42:45.404 [2024-11-20T17:09:45.320Z] Total : 17664.00 69.00 0.00 0.00 0.00 0.00 0.00 00:42:45.404 00:42:45.404 true 00:42:45.404 18:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:45.404 18:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:42:45.663 18:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:42:45.663 18:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:42:45.663 18:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2987319 00:42:46.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:42:46.233 Nvme0n1 : 3.00 17770.33 69.42 0.00 0.00 0.00 0.00 0.00 00:42:46.233 [2024-11-20T17:09:46.149Z] =================================================================================================================== 00:42:46.233 [2024-11-20T17:09:46.149Z] Total : 17770.33 69.42 0.00 0.00 0.00 0.00 0.00 00:42:46.233 00:42:47.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:47.612 Nvme0n1 : 4.00 17824.00 69.62 0.00 0.00 0.00 0.00 0.00 00:42:47.612 [2024-11-20T17:09:47.528Z] =================================================================================================================== 00:42:47.612 [2024-11-20T17:09:47.528Z] Total : 17824.00 69.62 0.00 0.00 0.00 0.00 0.00 00:42:47.612 00:42:48.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:48.552 Nvme0n1 : 5.00 18675.00 72.95 0.00 0.00 0.00 0.00 0.00 00:42:48.552 [2024-11-20T17:09:48.468Z] =================================================================================================================== 00:42:48.552 [2024-11-20T17:09:48.468Z] Total : 18675.00 72.95 0.00 0.00 0.00 0.00 0.00 00:42:48.552 00:42:49.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:49.492 Nvme0n1 : 6.00 19797.33 77.33 0.00 0.00 0.00 0.00 0.00 00:42:49.492 [2024-11-20T17:09:49.408Z] =================================================================================================================== 00:42:49.492 [2024-11-20T17:09:49.408Z] Total : 19797.33 77.33 0.00 0.00 0.00 0.00 0.00 00:42:49.492 00:42:50.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:50.430 Nvme0n1 : 7.00 20601.14 80.47 0.00 0.00 0.00 0.00 0.00 00:42:50.430 [2024-11-20T17:09:50.346Z] =================================================================================================================== 00:42:50.430 [2024-11-20T17:09:50.346Z] Total : 20601.14 80.47 0.00 0.00 0.00 0.00 0.00 00:42:50.430 00:42:51.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:51.369 Nvme0n1 : 8.00 21207.75 82.84 0.00 0.00 0.00 0.00 0.00 00:42:51.369 [2024-11-20T17:09:51.285Z] =================================================================================================================== 00:42:51.369 [2024-11-20T17:09:51.285Z] Total : 21207.75 82.84 0.00 0.00 0.00 0.00 0.00 00:42:51.369 00:42:52.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:52.306 Nvme0n1 : 9.00 21681.56 84.69 0.00 0.00 0.00 0.00 0.00 00:42:52.306 [2024-11-20T17:09:52.222Z] =================================================================================================================== 00:42:52.306 [2024-11-20T17:09:52.222Z] Total : 21681.56 84.69 0.00 0.00 0.00 0.00 0.00 00:42:52.306 00:42:53.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:53.247 Nvme0n1 : 10.00 22060.60 86.17 0.00 0.00 0.00 0.00 0.00 00:42:53.247 [2024-11-20T17:09:53.163Z] =================================================================================================================== 00:42:53.247 [2024-11-20T17:09:53.163Z] Total : 22060.60 86.17 0.00 0.00 0.00 0.00 0.00 00:42:53.247 00:42:53.247 00:42:53.247 Latency(us) 00:42:53.247 [2024-11-20T17:09:53.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:53.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:53.247 Nvme0n1 : 10.01 22062.06 86.18 0.00 0.00 5798.68 3454.29 24685.23 
00:42:53.247 [2024-11-20T17:09:53.163Z] =================================================================================================================== 00:42:53.247 [2024-11-20T17:09:53.163Z] Total : 22062.06 86.18 0.00 0.00 5798.68 3454.29 24685.23 00:42:53.247 { 00:42:53.247 "results": [ 00:42:53.247 { 00:42:53.247 "job": "Nvme0n1", 00:42:53.247 "core_mask": "0x2", 00:42:53.247 "workload": "randwrite", 00:42:53.247 "status": "finished", 00:42:53.247 "queue_depth": 128, 00:42:53.247 "io_size": 4096, 00:42:53.247 "runtime": 10.005139, 00:42:53.247 "iops": 22062.062306180855, 00:42:53.247 "mibps": 86.17993088351896, 00:42:53.247 "io_failed": 0, 00:42:53.247 "io_timeout": 0, 00:42:53.247 "avg_latency_us": 5798.675212699448, 00:42:53.247 "min_latency_us": 3454.2933333333335, 00:42:53.247 "max_latency_us": 24685.226666666666 00:42:53.247 } 00:42:53.247 ], 00:42:53.247 "core_count": 1 00:42:53.247 } 00:42:53.247 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2987099 00:42:53.247 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2987099 ']' 00:42:53.247 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2987099 00:42:53.247 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:42:53.247 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:53.247 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2987099 00:42:53.507 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:53.507 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:53.507 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2987099' 00:42:53.507 killing process with pid 2987099 00:42:53.507 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2987099 00:42:53.507 Received shutdown signal, test time was about 10.000000 seconds 00:42:53.507 00:42:53.507 Latency(us) 00:42:53.507 [2024-11-20T17:09:53.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:53.507 [2024-11-20T17:09:53.423Z] =================================================================================================================== 00:42:53.507 [2024-11-20T17:09:53.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:53.507 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2987099 00:42:53.508 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:53.768 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:42:53.768 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:53.768 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2983740 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2983740 00:42:54.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2983740 Killed "${NVMF_APP[@]}" "$@" 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=2989312 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 2989312 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2989312 ']' 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:54.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
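The "dirty" state itself comes from the records above: the original nvmf_tgt (pid 2983740) is SIGKILLed so the blobstore never sees a clean shutdown, and a fresh target is started in interrupt mode inside the test netns. A rough sketch; the binary path, netns name and flags are the ones traced, while the backgrounding and pid capture stand in for the harness's nvmfappstart/waitforlisten helpers:

    kill -9 2983740    # old target dies without closing the lvstore
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!         # 2989312 in this run; the harness then waits on /var/tmp/spdk.sock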
00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:54.028 18:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:54.028 [2024-11-20 18:09:53.932928] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:54.028 [2024-11-20 18:09:53.933897] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:42:54.028 [2024-11-20 18:09:53.933938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:54.288 [2024-11-20 18:09:54.014628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:54.288 [2024-11-20 18:09:54.042413] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:54.288 [2024-11-20 18:09:54.042444] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:54.288 [2024-11-20 18:09:54.042450] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:54.288 [2024-11-20 18:09:54.042455] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:54.288 [2024-11-20 18:09:54.042459] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:54.288 [2024-11-20 18:09:54.042480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:54.288 [2024-11-20 18:09:54.086499] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:54.288 [2024-11-20 18:09:54.086698] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
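The startup banner above names two ways to inspect the 0xFFFF tracepoint mask this target was launched with; both commands are quoted from the log, and only the copy destination is an assumption added here:

    spdk_trace -s nvmf -i 0            # runtime snapshot of events in the nvmf trace group
    cp /dev/shm/nvmf_trace.0 /tmp/     # assumed destination; keeps the raw buffer for offline analysis

The same /dev/shm/nvmf_trace.0 buffer is what the harness tars up during cleanup further down in this run.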
00:42:54.857 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:54.857 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:42:54.857 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:54.857 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:54.857 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:54.857 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:54.857 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:55.116 [2024-11-20 18:09:54.924482] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:42:55.116 [2024-11-20 18:09:54.924747] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:42:55.116 [2024-11-20 18:09:54.924835] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:42:55.116 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:42:55.116 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ad9e12bf-fa37-42e9-9873-1236c393ce67 00:42:55.116 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ad9e12bf-fa37-42e9-9873-1236c393ce67 00:42:55.116 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:42:55.116 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:42:55.116 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:42:55.116 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:42:55.116 18:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:55.376 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ad9e12bf-fa37-42e9-9873-1236c393ce67 -t 2000 00:42:55.376 [ 00:42:55.376 { 00:42:55.376 "name": "ad9e12bf-fa37-42e9-9873-1236c393ce67", 00:42:55.376 "aliases": [ 00:42:55.376 "lvs/lvol" 00:42:55.376 ], 00:42:55.376 "product_name": "Logical Volume", 00:42:55.376 "block_size": 4096, 00:42:55.376 "num_blocks": 38912, 00:42:55.376 "uuid": "ad9e12bf-fa37-42e9-9873-1236c393ce67", 00:42:55.376 "assigned_rate_limits": { 00:42:55.376 "rw_ios_per_sec": 0, 00:42:55.376 "rw_mbytes_per_sec": 0, 00:42:55.376 
"r_mbytes_per_sec": 0, 00:42:55.376 "w_mbytes_per_sec": 0 00:42:55.376 }, 00:42:55.376 "claimed": false, 00:42:55.376 "zoned": false, 00:42:55.376 "supported_io_types": { 00:42:55.376 "read": true, 00:42:55.376 "write": true, 00:42:55.376 "unmap": true, 00:42:55.376 "flush": false, 00:42:55.376 "reset": true, 00:42:55.376 "nvme_admin": false, 00:42:55.376 "nvme_io": false, 00:42:55.376 "nvme_io_md": false, 00:42:55.376 "write_zeroes": true, 00:42:55.376 "zcopy": false, 00:42:55.376 "get_zone_info": false, 00:42:55.376 "zone_management": false, 00:42:55.376 "zone_append": false, 00:42:55.376 "compare": false, 00:42:55.376 "compare_and_write": false, 00:42:55.376 "abort": false, 00:42:55.376 "seek_hole": true, 00:42:55.376 "seek_data": true, 00:42:55.376 "copy": false, 00:42:55.376 "nvme_iov_md": false 00:42:55.376 }, 00:42:55.376 "driver_specific": { 00:42:55.376 "lvol": { 00:42:55.376 "lvol_store_uuid": "856c1343-dc92-473c-888a-cc983df43d9b", 00:42:55.376 "base_bdev": "aio_bdev", 00:42:55.376 "thin_provision": false, 00:42:55.376 "num_allocated_clusters": 38, 00:42:55.376 "snapshot": false, 00:42:55.376 "clone": false, 00:42:55.376 "esnap_clone": false 00:42:55.376 } 00:42:55.376 } 00:42:55.376 } 00:42:55.376 ] 00:42:55.636 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:42:55.636 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:55.636 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:42:55.636 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:42:55.636 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:55.636 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:42:55.896 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:42:55.896 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:55.896 [2024-11-20 18:09:55.806954] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:56.156 18:09:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:42:56.156 18:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:56.156 request: 00:42:56.156 { 00:42:56.156 "uuid": "856c1343-dc92-473c-888a-cc983df43d9b", 00:42:56.156 "method": "bdev_lvol_get_lvstores", 00:42:56.156 "req_id": 1 00:42:56.156 } 00:42:56.156 Got JSON-RPC error response 00:42:56.156 response: 00:42:56.156 { 00:42:56.156 "code": -19, 00:42:56.156 "message": "No such device" 00:42:56.156 } 00:42:56.156 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:42:56.156 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:56.156 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:56.156 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:56.156 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:56.415 aio_bdev 00:42:56.415 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ad9e12bf-fa37-42e9-9873-1236c393ce67 00:42:56.415 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ad9e12bf-fa37-42e9-9873-1236c393ce67 00:42:56.415 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:42:56.415 18:09:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:42:56.415 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:42:56.415 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:42:56.415 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:56.674 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ad9e12bf-fa37-42e9-9873-1236c393ce67 -t 2000 00:42:56.674 [ 00:42:56.674 { 00:42:56.674 "name": "ad9e12bf-fa37-42e9-9873-1236c393ce67", 00:42:56.674 "aliases": [ 00:42:56.674 "lvs/lvol" 00:42:56.674 ], 00:42:56.674 "product_name": "Logical Volume", 00:42:56.674 "block_size": 4096, 00:42:56.674 "num_blocks": 38912, 00:42:56.674 "uuid": "ad9e12bf-fa37-42e9-9873-1236c393ce67", 00:42:56.674 "assigned_rate_limits": { 00:42:56.674 "rw_ios_per_sec": 0, 00:42:56.674 "rw_mbytes_per_sec": 0, 00:42:56.674 "r_mbytes_per_sec": 0, 00:42:56.674 "w_mbytes_per_sec": 0 00:42:56.674 }, 00:42:56.674 "claimed": false, 00:42:56.674 "zoned": false, 00:42:56.674 "supported_io_types": { 00:42:56.674 "read": true, 00:42:56.674 "write": true, 00:42:56.674 "unmap": true, 00:42:56.674 "flush": false, 00:42:56.674 "reset": true, 00:42:56.674 "nvme_admin": false, 00:42:56.674 "nvme_io": false, 00:42:56.674 "nvme_io_md": false, 00:42:56.674 "write_zeroes": true, 00:42:56.674 "zcopy": false, 00:42:56.674 "get_zone_info": false, 00:42:56.674 "zone_management": false, 00:42:56.674 "zone_append": false, 00:42:56.674 "compare": false, 00:42:56.674 "compare_and_write": false, 00:42:56.674 "abort": false, 00:42:56.674 "seek_hole": true, 00:42:56.674 "seek_data": true, 00:42:56.674 "copy": false, 00:42:56.674 "nvme_iov_md": false 00:42:56.674 }, 00:42:56.674 "driver_specific": { 00:42:56.674 "lvol": { 00:42:56.674 "lvol_store_uuid": "856c1343-dc92-473c-888a-cc983df43d9b", 00:42:56.674 "base_bdev": "aio_bdev", 00:42:56.674 "thin_provision": false, 00:42:56.674 "num_allocated_clusters": 38, 00:42:56.674 "snapshot": false, 00:42:56.674 "clone": false, 00:42:56.674 "esnap_clone": false 00:42:56.674 } 00:42:56.674 } 00:42:56.674 } 00:42:56.674 ] 00:42:56.674 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:42:56.674 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:56.674 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:56.933 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:56.933 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:56.933 18:09:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:57.193 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:57.193 18:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ad9e12bf-fa37-42e9-9873-1236c393ce67 00:42:57.193 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 856c1343-dc92-473c-888a-cc983df43d9b 00:42:57.452 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:57.712 00:42:57.712 real 0m17.469s 00:42:57.712 user 0m35.346s 00:42:57.712 sys 0m2.997s 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:57.712 ************************************ 00:42:57.712 END TEST lvs_grow_dirty 00:42:57.712 ************************************ 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:42:57.712 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:42:57.712 nvmf_trace.0 00:42:57.713 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:42:57.713 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:42:57.713 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:57.713 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
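nvmftestfini, traced in the records on either side of this point, unwinds the TCP transport roughly as below; every command is taken from the surrounding trace, with the literal pid 2989312 replaced by the $nvmfpid captured in the restart sketch above:

    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # killprocess 2989312 in this trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the test's firewall rules
    ip -4 addr flush cvl_0_1       # clear the test interface, after the netns teardown below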
00:42:57.713 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:57.713 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:42:57.713 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:57.713 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:57.713 rmmod nvme_tcp 00:42:57.713 rmmod nvme_fabrics 00:42:57.713 rmmod nvme_keyring 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 2989312 ']' 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 2989312 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2989312 ']' 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2989312 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2989312 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2989312' 00:42:57.973 killing process with pid 2989312 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2989312 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2989312 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:57.973 18:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:00.515 18:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:00.515 00:43:00.515 real 0m43.783s 00:43:00.515 user 0m53.056s 00:43:00.515 sys 0m10.240s 00:43:00.515 18:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:00.515 18:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:00.515 ************************************ 00:43:00.515 END TEST nvmf_lvs_grow 00:43:00.515 ************************************ 00:43:00.515 18:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:43:00.515 18:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:00.515 18:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:00.516 18:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:00.516 ************************************ 00:43:00.516 START TEST nvmf_bdev_io_wait 00:43:00.516 ************************************ 00:43:00.516 18:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:43:00.516 * Looking for test storage... 
00:43:00.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:00.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:00.516 --rc genhtml_branch_coverage=1 00:43:00.516 --rc genhtml_function_coverage=1 00:43:00.516 --rc genhtml_legend=1 00:43:00.516 --rc geninfo_all_blocks=1 00:43:00.516 --rc geninfo_unexecuted_blocks=1 00:43:00.516 00:43:00.516 ' 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:00.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:00.516 --rc genhtml_branch_coverage=1 00:43:00.516 --rc genhtml_function_coverage=1 00:43:00.516 --rc genhtml_legend=1 00:43:00.516 --rc geninfo_all_blocks=1 00:43:00.516 --rc geninfo_unexecuted_blocks=1 00:43:00.516 00:43:00.516 ' 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:00.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:00.516 --rc genhtml_branch_coverage=1 00:43:00.516 --rc genhtml_function_coverage=1 00:43:00.516 --rc genhtml_legend=1 00:43:00.516 --rc geninfo_all_blocks=1 00:43:00.516 --rc geninfo_unexecuted_blocks=1 00:43:00.516 00:43:00.516 ' 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:00.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:00.516 --rc genhtml_branch_coverage=1 00:43:00.516 --rc genhtml_function_coverage=1 00:43:00.516 --rc genhtml_legend=1 00:43:00.516 --rc geninfo_all_blocks=1 00:43:00.516 --rc 
geninfo_unexecuted_blocks=1 00:43:00.516 00:43:00.516 ' 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.516 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:43:00.517 18:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:08.652 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:08.653 18:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:08.653 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:08.653 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:08.653 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:08.653 18:10:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:08.653 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:08.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:08.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:43:08.653 00:43:08.653 --- 10.0.0.2 ping statistics --- 00:43:08.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.653 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:08.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:08.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:43:08.653 00:43:08.653 --- 10.0.0.1 ping statistics --- 00:43:08.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.653 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=2994262 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 2994262 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2994262 ']' 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:08.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
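What the trace above has built, in brief: the e810 port cvl_0_0 now lives in namespace cvl_0_0_ns_spdk with 10.0.0.2/24, its sibling cvl_0_1 stays in the root namespace with 10.0.0.1/24, both directions ping, and nvmf_tgt is being started inside the namespace while waitforlisten polls its RPC socket. A minimal sketch of that bring-up idiom, condensed from the traced helpers (the retry pacing and the rpc.py readiness probe are illustrative simplifications of the real nvmf/common.sh and autotest_common.sh code):
# move the target-side port into its own namespace and number both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
# launch the target inside the namespace, deferring subsystem init
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# poll until the app answers on its RPC socket; give up if it dies during startup
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1
    sleep 0.1
done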
00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 [2024-11-20 18:10:07.507793] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:08.653 [2024-11-20 18:10:07.508763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:43:08.653 [2024-11-20 18:10:07.508802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:08.653 [2024-11-20 18:10:07.567519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:08.653 [2024-11-20 18:10:07.598233] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:08.653 [2024-11-20 18:10:07.598266] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:08.653 [2024-11-20 18:10:07.598272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:08.653 [2024-11-20 18:10:07.598279] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:08.653 [2024-11-20 18:10:07.598284] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:08.653 [2024-11-20 18:10:07.602175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:08.653 [2024-11-20 18:10:07.602483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:08.653 [2024-11-20 18:10:07.602691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:08.653 [2024-11-20 18:10:07.602691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:08.653 [2024-11-20 18:10:07.602967] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
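The app_setup_trace notices just above are actionable while the target is alive: all tracepoint groups were enabled with -e 0xFFFF, so a snapshot can be pulled live or recovered afterwards from the /dev/shm copy the notice names. A hedged sketch (the output paths are illustrative; the -s/-i form is exactly what the notice suggests):
# live snapshot of the nvmf app, shm instance 0, per the notice
./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.snapshot.txt
# offline: keep the shared-memory trace file and parse it later
cp /dev/shm/nvmf_trace.0 /tmp/ && ./build/bin/spdk_trace -f /tmp/nvmf_trace.0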
00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 [2024-11-20 18:10:07.783902] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:08.653 [2024-11-20 18:10:07.784289] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:08.653 [2024-11-20 18:10:07.784823] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:08.653 [2024-11-20 18:10:07.784956] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
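The bdev_set_options -p 5 -c 1 call traced above is the crux of this suite: because the target is idling under --wait-for-rpc, the bdev_io pool can still be shrunk — to five entries total with a per-thread cache of one — before framework_start_init runs, so the 128-deep bdevperf queues that follow should repeatedly exhaust the pool and exercise the bdev_io-wait path the test is named for. The full RPC bring-up, condensed, with rpc as a stand-in wrapper for the test's rpc_cmd helper (all calls appear in the trace above and below):
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }     # stand-in for rpc_cmd
rpc bdev_set_options -p 5 -c 1                             # tiny bdev_io pool: 5 total, cache of 1
rpc framework_start_init                                   # finish the init deferred by --wait-for-rpc
rpc nvmf_create_transport -t tcp -o -u 8192                # transport options exactly as traced below
rpc bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB malloc bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420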
00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 [2024-11-20 18:10:07.795495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 Malloc0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.653 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:08.654 [2024-11-20 18:10:07.883766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2994353 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2994355 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:08.654 { 00:43:08.654 "params": { 00:43:08.654 "name": "Nvme$subsystem", 00:43:08.654 "trtype": "$TEST_TRANSPORT", 00:43:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:08.654 "adrfam": "ipv4", 00:43:08.654 "trsvcid": "$NVMF_PORT", 00:43:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:08.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:08.654 "hdgst": ${hdgst:-false}, 00:43:08.654 "ddgst": ${ddgst:-false} 00:43:08.654 }, 00:43:08.654 "method": "bdev_nvme_attach_controller" 00:43:08.654 } 00:43:08.654 EOF 00:43:08.654 )") 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2994357 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:08.654 { 00:43:08.654 "params": { 00:43:08.654 "name": "Nvme$subsystem", 00:43:08.654 "trtype": "$TEST_TRANSPORT", 00:43:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:08.654 "adrfam": "ipv4", 00:43:08.654 "trsvcid": "$NVMF_PORT", 00:43:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:08.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:08.654 "hdgst": ${hdgst:-false}, 00:43:08.654 "ddgst": ${ddgst:-false} 00:43:08.654 }, 00:43:08.654 "method": "bdev_nvme_attach_controller" 00:43:08.654 } 00:43:08.654 EOF 00:43:08.654 )") 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2994360 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:08.654 { 00:43:08.654 "params": { 00:43:08.654 "name": "Nvme$subsystem", 00:43:08.654 "trtype": "$TEST_TRANSPORT", 00:43:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:08.654 "adrfam": "ipv4", 00:43:08.654 "trsvcid": "$NVMF_PORT", 00:43:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:08.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:08.654 "hdgst": ${hdgst:-false}, 00:43:08.654 "ddgst": ${ddgst:-false} 00:43:08.654 }, 00:43:08.654 "method": "bdev_nvme_attach_controller" 00:43:08.654 } 00:43:08.654 EOF 00:43:08.654 )") 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:08.654 { 00:43:08.654 "params": { 00:43:08.654 "name": "Nvme$subsystem", 00:43:08.654 "trtype": "$TEST_TRANSPORT", 00:43:08.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:08.654 "adrfam": "ipv4", 00:43:08.654 "trsvcid": "$NVMF_PORT", 00:43:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:08.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:08.654 "hdgst": ${hdgst:-false}, 00:43:08.654 "ddgst": ${ddgst:-false} 00:43:08.654 }, 00:43:08.654 "method": "bdev_nvme_attach_controller" 00:43:08.654 } 00:43:08.654 EOF 00:43:08.654 )") 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2994353 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:08.654 "params": { 00:43:08.654 "name": "Nvme1", 00:43:08.654 "trtype": "tcp", 00:43:08.654 "traddr": "10.0.0.2", 00:43:08.654 "adrfam": "ipv4", 00:43:08.654 "trsvcid": "4420", 00:43:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:08.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:08.654 "hdgst": false, 00:43:08.654 "ddgst": false 00:43:08.654 }, 00:43:08.654 "method": "bdev_nvme_attach_controller" 00:43:08.654 }' 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:08.654 "params": { 00:43:08.654 "name": "Nvme1", 00:43:08.654 "trtype": "tcp", 00:43:08.654 "traddr": "10.0.0.2", 00:43:08.654 "adrfam": "ipv4", 00:43:08.654 "trsvcid": "4420", 00:43:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:08.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:08.654 "hdgst": false, 00:43:08.654 "ddgst": false 00:43:08.654 }, 00:43:08.654 "method": "bdev_nvme_attach_controller" 00:43:08.654 }' 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:08.654 "params": { 00:43:08.654 "name": "Nvme1", 00:43:08.654 "trtype": "tcp", 00:43:08.654 "traddr": "10.0.0.2", 00:43:08.654 "adrfam": "ipv4", 00:43:08.654 "trsvcid": "4420", 00:43:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:08.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:08.654 "hdgst": false, 00:43:08.654 "ddgst": false 00:43:08.654 }, 00:43:08.654 "method": "bdev_nvme_attach_controller" 00:43:08.654 }' 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:43:08.654 18:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:08.654 "params": { 00:43:08.654 "name": "Nvme1", 00:43:08.654 "trtype": "tcp", 00:43:08.654 "traddr": "10.0.0.2", 00:43:08.654 "adrfam": "ipv4", 00:43:08.654 "trsvcid": "4420", 00:43:08.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:08.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:08.654 "hdgst": false, 00:43:08.654 "ddgst": false 00:43:08.654 }, 00:43:08.654 "method": "bdev_nvme_attach_controller" 00:43:08.654 }' 00:43:08.654 [2024-11-20 18:10:07.938879] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:43:08.654 [2024-11-20 18:10:07.938879] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
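Note how each bdevperf instance receives its controller definition: gen_nvmf_target_json emits the JSON printed just above, and the harness hands it over as --json /dev/fd/63 through bash process substitution, so no config file ever touches disk. Schematically, for the write job (flags copied from the trace; the fd number is simply whatever the shell assigns):
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256    # the child sees the substitution as /dev/fd/63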
00:43:08.654 [2024-11-20 18:10:07.938944] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:43:08.654 [2024-11-20 18:10:07.938945] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:43:08.654 [2024-11-20 18:10:07.943467] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
[2024-11-20 18:10:07.943527] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:43:08.654 [2024-11-20 18:10:07.949247] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
[2024-11-20 18:10:07.949319] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:43:08.654 [2024-11-20 18:10:08.145859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:08.654 [2024-11-20 18:10:08.173626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:43:08.654 [2024-11-20 18:10:08.237552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:08.654 [2024-11-20 18:10:08.266326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:43:08.654 [2024-11-20 18:10:08.331334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:08.654 [2024-11-20 18:10:08.362049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:43:08.654 [2024-11-20 18:10:08.397654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:08.654 [2024-11-20 18:10:08.423736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:43:08.916 Running I/O for 1 seconds...
00:43:08.916 Running I/O for 1 seconds...
00:43:09.177 Running I/O for 1 seconds...
00:43:09.177 Running I/O for 1 seconds...
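All four jobs run concurrently against the same cnode1 subsystem — hence the four interleaved 'Running I/O for 1 seconds...' lines above. Each instance gets its own core mask and, via -i, its own DPDK file-prefix (spdk1..spdk4), so the processes cannot collide on hugepage state; the script then reaps them with one wait per PID, which is why the result tables below print as each job completes. The shape of the harness, condensed (the bdevperf wrapper function is illustrative):
bdevperf() { ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -t 1 -s 256 "$@"; }
bdevperf -m 0x10 -i 1 -w write & WRITE_PID=$!
bdevperf -m 0x20 -i 2 -w read  & READ_PID=$!
bdevperf -m 0x40 -i 3 -w flush & FLUSH_PID=$!
bdevperf -m 0x80 -i 4 -w unmap & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID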
00:43:09.756 13844.00 IOPS, 54.08 MiB/s
00:43:09.756 Latency(us)
00:43:09.756 [2024-11-20T17:10:09.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:09.756 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:43:09.756 Nvme1n1 : 1.01 13907.13 54.32 0.00 0.00 9174.36 2239.15 13817.17
00:43:09.756 [2024-11-20T17:10:09.672Z] ===================================================================================================================
00:43:09.756 [2024-11-20T17:10:09.672Z] Total : 13907.13 54.32 0.00 0.00 9174.36 2239.15 13817.17
00:43:09.756 8004.00 IOPS, 31.27 MiB/s
00:43:09.756 Latency(us)
00:43:09.756 [2024-11-20T17:10:09.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:09.756 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:43:09.756 Nvme1n1 : 1.02 8011.80 31.30 0.00 0.00 15820.01 5461.33 29054.29
00:43:09.756 [2024-11-20T17:10:09.672Z] ===================================================================================================================
00:43:09.756 [2024-11-20T17:10:09.672Z] Total : 8011.80 31.30 0.00 0.00 15820.01 5461.33 29054.29
00:43:10.016 18:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2994355
00:43:10.016 18:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2994357
00:43:10.277 9759.00 IOPS, 38.12 MiB/s
00:43:10.277 Latency(us)
00:43:10.277 [2024-11-20T17:10:10.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:10.277 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:43:10.277 Nvme1n1 : 1.01 9862.38 38.52 0.00 0.00 12940.35 3986.77 37573.97
00:43:10.277 [2024-11-20T17:10:10.193Z] ===================================================================================================================
00:43:10.277 [2024-11-20T17:10:10.193Z] Total : 9862.38 38.52 0.00 0.00 12940.35 3986.77 37573.97
00:43:10.277 188512.00 IOPS, 736.38 MiB/s
00:43:10.277 Latency(us)
00:43:10.277 [2024-11-20T17:10:10.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:10.277 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:43:10.277 Nvme1n1 : 1.00 188136.58 734.91 0.00 0.00 677.04 308.91 1966.08
00:43:10.277 [2024-11-20T17:10:10.193Z] ===================================================================================================================
00:43:10.277 [2024-11-20T17:10:10.193Z] Total : 188136.58 734.91 0.00 0.00 677.04 308.91 1966.08
00:43:10.277 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2994360
00:43:10.277 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:43:10.277 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:43:10.277 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:10.537 rmmod nvme_tcp 00:43:10.537 rmmod nvme_fabrics 00:43:10.537 rmmod nvme_keyring 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 2994262 ']' 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 2994262 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2994262 ']' 00:43:10.537 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2994262 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2994262 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2994262' 00:43:10.538 killing process with pid 2994262 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2994262 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2994262 00:43:10.538 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 
00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:10.798 18:10:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:12.709 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:12.709 00:43:12.709 real 0m12.553s 00:43:12.709 user 0m17.111s 00:43:12.709 sys 0m7.770s 00:43:12.709 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:12.709 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:43:12.709 ************************************ 00:43:12.709 END TEST nvmf_bdev_io_wait 00:43:12.709 ************************************ 00:43:12.709 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:43:12.709 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:12.709 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:12.709 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:12.709 ************************************ 00:43:12.709 START TEST nvmf_queue_depth 00:43:12.709 ************************************ 00:43:12.709 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:43:12.971 * Looking for test storage... 
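Before the queue-depth test gets going, note that the nvmf_bdev_io_wait teardown traced just above (nvmftestfini) reduces to a handful of host-side commands. A condensed equivalent, as a sketch: the PID, namespace, and interface names are taken from this particular run, and the body of _remove_spdk_ns is assumed to be a plain netns delete:

  sync
  modprobe -v -r nvme-tcp                                # nvme_fabrics/nvme_keyring fall out too, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 2994262                                           # the nvmf_tgt reactor from this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the test's ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed _remove_spdk_ns body
  ip -4 addr flush cvl_0_1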
00:43:12.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:43:12.971 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:12.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.972 --rc genhtml_branch_coverage=1 00:43:12.972 --rc genhtml_function_coverage=1 00:43:12.972 --rc genhtml_legend=1 00:43:12.972 --rc geninfo_all_blocks=1 00:43:12.972 --rc geninfo_unexecuted_blocks=1 00:43:12.972 00:43:12.972 ' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:12.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.972 --rc genhtml_branch_coverage=1 00:43:12.972 --rc genhtml_function_coverage=1 00:43:12.972 --rc genhtml_legend=1 00:43:12.972 --rc geninfo_all_blocks=1 00:43:12.972 --rc geninfo_unexecuted_blocks=1 00:43:12.972 00:43:12.972 ' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:12.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.972 --rc genhtml_branch_coverage=1 00:43:12.972 --rc genhtml_function_coverage=1 00:43:12.972 --rc genhtml_legend=1 00:43:12.972 --rc geninfo_all_blocks=1 00:43:12.972 --rc geninfo_unexecuted_blocks=1 00:43:12.972 00:43:12.972 ' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:12.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:12.972 --rc genhtml_branch_coverage=1 00:43:12.972 --rc genhtml_function_coverage=1 00:43:12.972 --rc genhtml_legend=1 00:43:12.972 --rc geninfo_all_blocks=1 00:43:12.972 --rc 
geninfo_unexecuted_blocks=1 00:43:12.972 00:43:12.972 ' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:43:12.972 18:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:21.110 18:10:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:21.110 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:21.110 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:21.110 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:21.110 18:10:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:21.110 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:21.111 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:21.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:21.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:43:21.111 00:43:21.111 --- 10.0.0.2 ping statistics --- 00:43:21.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.111 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:21.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:21.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:43:21.111 00:43:21.111 --- 10.0.0.1 ping statistics --- 00:43:21.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.111 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:43:21.111 18:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=2998849 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 2998849 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2998849 ']' 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:21.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
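Condensed, the connectivity plumbing traced above is the following sequence (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run; the target NIC lives in its own network namespace so the initiator reaches it over a real TCP path):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # both directions are verified above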
00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.111 [2024-11-20 18:10:20.115099] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:21.111 [2024-11-20 18:10:20.116234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:43:21.111 [2024-11-20 18:10:20.116286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:21.111 [2024-11-20 18:10:20.206282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:21.111 [2024-11-20 18:10:20.253426] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:21.111 [2024-11-20 18:10:20.253479] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:21.111 [2024-11-20 18:10:20.253488] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:21.111 [2024-11-20 18:10:20.253495] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:21.111 [2024-11-20 18:10:20.253502] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:21.111 [2024-11-20 18:10:20.253533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:21.111 [2024-11-20 18:10:20.317553] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:21.111 [2024-11-20 18:10:20.317834] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
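The interrupt-mode notices above belong to the target process started a few lines earlier inside the namespace. Spelled out, with paths and arguments verbatim from the trace (-m 0x2 is why the lone reactor lands on core 1):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!                                  # 2998849 in this run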
00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.111 [2024-11-20 18:10:20.974385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:21.111 18:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.111 Malloc0 00:43:21.111 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.112 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.372 [2024-11-20 18:10:21.050531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2999023 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2999023 /var/tmp/bdevperf.sock 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2999023 ']' 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:21.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:21.372 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:21.372 [2024-11-20 18:10:21.107476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
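Taken together, the RPCs traced through this stretch configure the target and wire up the initiator-side bdevperf. Replayed by hand they look like the following sketch; scripts/rpc.py stands in for the harness's rpc_cmd wrapper (an assumption about that helper), while every RPC name and argument is verbatim from the trace:

  # target side, over the default /var/tmp/spdk.sock:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: -z keeps bdevperf idle until a controller is attached,
  # then perform_tests starts the 10 s, queue-depth-1024 verify run:
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests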
00:43:21.372 [2024-11-20 18:10:21.107540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999023 ] 00:43:21.372 [2024-11-20 18:10:21.188329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:21.372 [2024-11-20 18:10:21.235331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.315 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:22.315 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:43:22.315 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:43:22.315 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:22.315 18:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:22.315 NVMe0n1 00:43:22.315 18:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:22.315 18:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:22.575 Running I/O for 10 seconds... 00:43:24.456 9029.00 IOPS, 35.27 MiB/s [2024-11-20T17:10:25.312Z] 9097.00 IOPS, 35.54 MiB/s [2024-11-20T17:10:26.693Z] 9216.00 IOPS, 36.00 MiB/s [2024-11-20T17:10:27.634Z] 10259.50 IOPS, 40.08 MiB/s [2024-11-20T17:10:28.573Z] 11063.80 IOPS, 43.22 MiB/s [2024-11-20T17:10:29.512Z] 11569.67 IOPS, 45.19 MiB/s [2024-11-20T17:10:30.451Z] 11906.71 IOPS, 46.51 MiB/s [2024-11-20T17:10:31.394Z] 12173.88 IOPS, 47.55 MiB/s [2024-11-20T17:10:32.334Z] 12405.44 IOPS, 48.46 MiB/s [2024-11-20T17:10:32.594Z] 12567.60 IOPS, 49.09 MiB/s 00:43:32.678 Latency(us) 00:43:32.678 [2024-11-20T17:10:32.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:32.678 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:43:32.678 Verification LBA range: start 0x0 length 0x4000 00:43:32.678 NVMe0n1 : 10.05 12598.04 49.21 0.00 0.00 80976.26 16930.13 76021.76 00:43:32.678 [2024-11-20T17:10:32.594Z] =================================================================================================================== 00:43:32.678 [2024-11-20T17:10:32.594Z] Total : 12598.04 49.21 0.00 0.00 80976.26 16930.13 76021.76 00:43:32.678 { 00:43:32.678 "results": [ 00:43:32.678 { 00:43:32.678 "job": "NVMe0n1", 00:43:32.678 "core_mask": "0x1", 00:43:32.678 "workload": "verify", 00:43:32.678 "status": "finished", 00:43:32.678 "verify_range": { 00:43:32.678 "start": 0, 00:43:32.678 "length": 16384 00:43:32.678 }, 00:43:32.678 "queue_depth": 1024, 00:43:32.678 "io_size": 4096, 00:43:32.678 "runtime": 10.053074, 00:43:32.678 "iops": 12598.03717748422, 00:43:32.678 "mibps": 49.21108272454774, 00:43:32.678 "io_failed": 0, 00:43:32.678 "io_timeout": 0, 00:43:32.678 "avg_latency_us": 80976.26336810134, 00:43:32.678 "min_latency_us": 16930.133333333335, 00:43:32.678 "max_latency_us": 76021.76 00:43:32.678 } 00:43:32.678 ], 
00:43:32.678 "core_count": 1 00:43:32.678 } 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2999023 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2999023 ']' 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2999023 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2999023 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:32.678 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2999023' 00:43:32.678 killing process with pid 2999023 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2999023 00:43:32.679 Received shutdown signal, test time was about 10.000000 seconds 00:43:32.679 00:43:32.679 Latency(us) 00:43:32.679 [2024-11-20T17:10:32.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:32.679 [2024-11-20T17:10:32.595Z] =================================================================================================================== 00:43:32.679 [2024-11-20T17:10:32.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2999023 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:32.679 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:32.679 rmmod nvme_tcp 00:43:32.939 rmmod nvme_fabrics 00:43:32.939 rmmod nvme_keyring 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:43:32.939 18:10:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 2998849 ']' 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 2998849 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2998849 ']' 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2998849 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2998849 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2998849' 00:43:32.939 killing process with pid 2998849 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2998849 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2998849 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:32.939 18:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:35.480 18:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:35.480 00:43:35.480 real 0m22.338s 00:43:35.480 user 0m24.885s 00:43:35.480 sys 0m7.194s 00:43:35.480 18:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:43:35.480 18:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:43:35.480 ************************************ 00:43:35.480 END TEST nvmf_queue_depth 00:43:35.480 ************************************ 00:43:35.480 18:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:43:35.480 18:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:35.480 18:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:35.480 18:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:35.480 ************************************ 00:43:35.480 START TEST nvmf_target_multipath 00:43:35.480 ************************************ 00:43:35.480 18:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:43:35.480 * Looking for test storage... 00:43:35.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:43:35.480 18:10:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.480 --rc genhtml_branch_coverage=1 00:43:35.480 --rc genhtml_function_coverage=1 00:43:35.480 --rc genhtml_legend=1 00:43:35.480 --rc geninfo_all_blocks=1 00:43:35.480 --rc geninfo_unexecuted_blocks=1 00:43:35.480 00:43:35.480 ' 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.480 --rc genhtml_branch_coverage=1 00:43:35.480 --rc genhtml_function_coverage=1 00:43:35.480 --rc genhtml_legend=1 00:43:35.480 --rc geninfo_all_blocks=1 00:43:35.480 --rc geninfo_unexecuted_blocks=1 00:43:35.480 00:43:35.480 ' 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.480 --rc genhtml_branch_coverage=1 00:43:35.480 --rc genhtml_function_coverage=1 00:43:35.480 --rc genhtml_legend=1 00:43:35.480 --rc geninfo_all_blocks=1 00:43:35.480 --rc 
geninfo_unexecuted_blocks=1 00:43:35.480 00:43:35.480 ' 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.480 --rc genhtml_branch_coverage=1 00:43:35.480 --rc genhtml_function_coverage=1 00:43:35.480 --rc genhtml_legend=1 00:43:35.480 --rc geninfo_all_blocks=1 00:43:35.480 --rc geninfo_unexecuted_blocks=1 00:43:35.480 00:43:35.480 ' 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:35.480 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:35.481 18:10:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:43:35.481 18:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:43.610 18:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:43.610 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:43.610 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:43.610 18:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:43.610 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:43.610 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:43.611 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:43.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:43.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:43:43.611 00:43:43.611 --- 10.0.0.2 ping statistics --- 00:43:43.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:43.611 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:43.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:43.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:43:43.611 00:43:43.611 --- 10.0.0.1 ping statistics --- 00:43:43.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:43.611 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:43:43.611 only one NIC for nvmf test 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:43.611 rmmod nvme_tcp 00:43:43.611 rmmod nvme_fabrics 00:43:43.611 rmmod nvme_keyring 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:43.611 18:10:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:43.611 18:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:43:44.994 18:10:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:44.994 00:43:44.994 real 0m9.671s 00:43:44.994 user 0m2.110s 00:43:44.994 sys 0m5.513s 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:44.994 ************************************ 00:43:44.994 END TEST nvmf_target_multipath 00:43:44.994 ************************************ 00:43:44.994 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:44.995 ************************************ 00:43:44.995 START TEST nvmf_zcopy 00:43:44.995 ************************************ 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:44.995 * Looking for test storage... 
00:43:44.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:44.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.995 --rc genhtml_branch_coverage=1 00:43:44.995 --rc genhtml_function_coverage=1 00:43:44.995 --rc genhtml_legend=1 00:43:44.995 --rc geninfo_all_blocks=1 00:43:44.995 --rc geninfo_unexecuted_blocks=1 00:43:44.995 00:43:44.995 ' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:44.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.995 --rc genhtml_branch_coverage=1 00:43:44.995 --rc genhtml_function_coverage=1 00:43:44.995 --rc genhtml_legend=1 00:43:44.995 --rc geninfo_all_blocks=1 00:43:44.995 --rc geninfo_unexecuted_blocks=1 00:43:44.995 00:43:44.995 ' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:44.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.995 --rc genhtml_branch_coverage=1 00:43:44.995 --rc genhtml_function_coverage=1 00:43:44.995 --rc genhtml_legend=1 00:43:44.995 --rc geninfo_all_blocks=1 00:43:44.995 --rc geninfo_unexecuted_blocks=1 00:43:44.995 00:43:44.995 ' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:44.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.995 --rc genhtml_branch_coverage=1 00:43:44.995 --rc genhtml_function_coverage=1 00:43:44.995 --rc genhtml_legend=1 00:43:44.995 --rc geninfo_all_blocks=1 00:43:44.995 --rc geninfo_unexecuted_blocks=1 00:43:44.995 00:43:44.995 ' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:43:44.995 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:45.256 18:10:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:43:45.256 18:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:43:53.391 18:10:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:53.391 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:53.392 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:53.392 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:53.392 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:53.392 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:53.392 18:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:53.392 18:10:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:53.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:53.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:43:53.392 00:43:53.392 --- 10.0.0.2 ping statistics --- 00:43:53.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.392 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:53.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:53.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:43:53.392 00:43:53.392 --- 10.0.0.1 ping statistics --- 00:43:53.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:53.392 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3009269 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3009269 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 
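The nvmf_tcp_init sequence traced above is the whole network fixture for this run: the two ice ports found earlier are split so that cvl_0_0 (the target side, 10.0.0.2) moves into a fresh network namespace while cvl_0_1 (the initiator side, 10.0.0.1) stays in the default one, TCP port 4420 is opened, and both directions are sanity-checked with ping. A minimal standalone sketch of the same setup, using the interface names and addresses from the trace (run as root, and substitute your own NICs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Keeping the target in its own namespace lets initiator and target share one host while their traffic still crosses the two physical ports.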
00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3009269 ']' 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:53.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:53.392 18:10:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.392 [2024-11-20 18:10:52.252716] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:53.392 [2024-11-20 18:10:52.253682] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:43:53.392 [2024-11-20 18:10:52.253719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:53.392 [2024-11-20 18:10:52.333795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:53.392 [2024-11-20 18:10:52.364227] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:53.393 [2024-11-20 18:10:52.364267] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:53.393 [2024-11-20 18:10:52.364274] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:53.393 [2024-11-20 18:10:52.364281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:53.393 [2024-11-20 18:10:52.364287] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:53.393 [2024-11-20 18:10:52.364305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:53.393 [2024-11-20 18:10:52.411968] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:53.393 [2024-11-20 18:10:52.412228] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
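nvmfappstart then launches the target application inside that namespace, pinned to one core (-m 0x2) and, for this interrupt-mode suite, with --interrupt-mode, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A condensed launch-and-wait sketch; the socket-existence poll is an assumption standing in for the real waitforlisten, which retries an actual RPC:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Assumption: treat the appearance of the RPC socket as "ready";
    # autotest_common.sh instead polls rpc.py with a retry budget.
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.1
    done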
00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.393 [2024-11-20 18:10:53.089029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.393 [2024-11-20 18:10:53.117296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:43:53.393 18:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.393 malloc0 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:53.393 { 00:43:53.393 "params": { 00:43:53.393 "name": "Nvme$subsystem", 00:43:53.393 "trtype": "$TEST_TRANSPORT", 00:43:53.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:53.393 "adrfam": "ipv4", 00:43:53.393 "trsvcid": "$NVMF_PORT", 00:43:53.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:53.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:53.393 "hdgst": ${hdgst:-false}, 00:43:53.393 "ddgst": ${ddgst:-false} 00:43:53.393 }, 00:43:53.393 "method": "bdev_nvme_attach_controller" 00:43:53.393 } 00:43:53.393 EOF 00:43:53.393 )") 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:43:53.393 18:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:53.393 "params": { 00:43:53.393 "name": "Nvme1", 00:43:53.393 "trtype": "tcp", 00:43:53.393 "traddr": "10.0.0.2", 00:43:53.393 "adrfam": "ipv4", 00:43:53.393 "trsvcid": "4420", 00:43:53.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:53.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:53.393 "hdgst": false, 00:43:53.393 "ddgst": false 00:43:53.393 }, 00:43:53.393 "method": "bdev_nvme_attach_controller" 00:43:53.393 }' 00:43:53.393 [2024-11-20 18:10:53.239714] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
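The rpc_cmd calls traced above build the target configuration for the test: a TCP transport created with --zcopy (the feature under test) and -c 0, subsystem nqn.2016-06.io.spdk:cnode1 with listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same sequence written out directly would be:

    rpc=./scripts/rpc.py               # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf invocation above passes --json /dev/fd/62: gen_nvmf_target_json prints a bdev_nvme_attach_controller stanza (its expanded form is echoed in the trace) and bash process substitution hands it to bdevperf without a temporary file. A sketch of the same pattern; gen_json is a hypothetical stand-in, and the "subsystems"/"bdev" wrapper is an assumption about the final shape the helper assembles with jq:

    # Hypothetical stand-in for gen_nvmf_target_json: wrap the traced
    # bdev_nvme_attach_controller params in a minimal bdev-subsystem config.
    gen_json() {
        printf '%s\n' '{
          "subsystems": [{
            "subsystem": "bdev",
            "config": [{
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }]
          }]
        }'
    }
    # <(...) shows up in argv as /dev/fd/NN, matching --json /dev/fd/62 above
    ./build/examples/bdevperf --json <(gen_json) -t 10 -q 128 -w verify -o 8192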
00:43:53.393 [2024-11-20 18:10:53.239770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3009519 ]
00:43:53.653 [2024-11-20 18:10:53.315784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:53.653 [2024-11-20 18:10:53.347936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:43:53.913 Running I/O for 10 seconds...
00:43:55.791 6362.00 IOPS, 49.70 MiB/s
[2024-11-20T17:10:56.646Z] 6483.50 IOPS, 50.65 MiB/s
[2024-11-20T17:10:58.032Z] 6512.67 IOPS, 50.88 MiB/s
[2024-11-20T17:10:58.970Z] 6513.75 IOPS, 50.89 MiB/s
[2024-11-20T17:10:59.908Z] 6534.60 IOPS, 51.05 MiB/s
[2024-11-20T17:11:00.846Z] 6537.50 IOPS, 51.07 MiB/s
[2024-11-20T17:11:01.785Z] 6544.43 IOPS, 51.13 MiB/s
[2024-11-20T17:11:02.724Z] 6774.00 IOPS, 52.92 MiB/s
[2024-11-20T17:11:03.664Z] 7083.33 IOPS, 55.34 MiB/s
[2024-11-20T17:11:03.664Z] 7331.60 IOPS, 57.28 MiB/s
00:44:03.748 Latency(us)
00:44:03.748 [2024-11-20T17:11:03.664Z] Device Information          : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:44:03.748 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:44:03.748 Verification LBA range: start 0x0 length 0x1000
00:44:03.748 Nvme1n1                     : 10.01       7333.18  57.29  0.00    0.00  17408.14  730.45  29054.29
00:44:03.748 [2024-11-20T17:11:03.664Z] ===================================================================================================================
00:44:03.748 [2024-11-20T17:11:03.664Z] Total                       :             7333.18  57.29  0.00    0.00  17408.14  730.45  29054.29
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3011334
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:44:04.009 {
00:44:04.009   "params": {
00:44:04.009     "name": "Nvme$subsystem",
00:44:04.009     "trtype": "$TEST_TRANSPORT",
00:44:04.009     "traddr": "$NVMF_FIRST_TARGET_IP",
00:44:04.009     "adrfam": "ipv4",
00:44:04.009     "trsvcid": "$NVMF_PORT",
00:44:04.009     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:44:04.009     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:44:04.009     "hdgst": ${hdgst:-false},
00:44:04.009     "ddgst": ${ddgst:-false}
00:44:04.009   },
00:44:04.009   "method": "bdev_nvme_attach_controller"
00:44:04.009 }
00:44:04.009 EOF
00:44:04.009 )")
00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:44:04.009 
[2024-11-20 18:11:03.756627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.756655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:44:04.009 18:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:04.009 "params": { 00:44:04.009 "name": "Nvme1", 00:44:04.009 "trtype": "tcp", 00:44:04.009 "traddr": "10.0.0.2", 00:44:04.009 "adrfam": "ipv4", 00:44:04.009 "trsvcid": "4420", 00:44:04.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:04.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:04.009 "hdgst": false, 00:44:04.009 "ddgst": false 00:44:04.009 }, 00:44:04.009 "method": "bdev_nvme_attach_controller" 00:44:04.009 }' 00:44:04.009 [2024-11-20 18:11:03.768592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.768601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.780590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.780598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.792590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.792598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.804590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.804599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.807278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:44:04.009 [2024-11-20 18:11:03.807327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011334 ] 00:44:04.009 [2024-11-20 18:11:03.816590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.816599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.828590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.828597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.840590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.840599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.852589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.852598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.864589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.864598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.876590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.876599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.881077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.009 [2024-11-20 18:11:03.888591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.888601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.900592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.900607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.009 [2024-11-20 18:11:03.909022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:04.009 [2024-11-20 18:11:03.912590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.009 [2024-11-20 18:11:03.912600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.269 [2024-11-20 18:11:03.924598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.269 [2024-11-20 18:11:03.924610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.269 [2024-11-20 18:11:03.936596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.269 [2024-11-20 18:11:03.936608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.269 [2024-11-20 18:11:03.948592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:04.269 [2024-11-20 18:11:03.948602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:04.269 [2024-11-20 18:11:03.960591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:44:04.269 [2024-11-20 18:11:03.960600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line failure, subsystem.c:2128 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace", repeats for every nvmf_subsystem_add_ns attempt, roughly one pair every 12 ms; duplicate pairs from 18:11:03.972598 through 18:11:04.188677 elided ...]
00:44:04.530 Running I/O for 5 seconds...
[... duplicate error pairs from 18:11:04.200593 through 18:11:05.200957 elided ...]
00:44:05.311 18850.00 IOPS, 147.27 MiB/s [2024-11-20T17:11:05.227Z]
[... duplicate error pairs from 18:11:05.215812 through 18:11:05.724509 elided ...]
00:44:05.832 [2024-11-20 18:11:05.736632] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:05.832 [2024-11-20 18:11:05.736648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.092 [2024-11-20 18:11:05.748877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.092 [2024-11-20 18:11:05.748892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.092 [2024-11-20 18:11:05.763445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.092 [2024-11-20 18:11:05.763461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.092 [2024-11-20 18:11:05.776766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.092 [2024-11-20 18:11:05.776780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.092 [2024-11-20 18:11:05.791698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.092 [2024-11-20 18:11:05.791714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.092 [2024-11-20 18:11:05.804618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.092 [2024-11-20 18:11:05.804633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.092 [2024-11-20 18:11:05.816293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.816312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.829447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.829462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.843976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.843992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.856870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.856885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.871824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.871839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.884777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.884792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.899791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.899806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.912564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.912579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.925010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.925025] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.939837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.939852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.952661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.952677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.965326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.965341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.980185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.980200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:05.992655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:05.992671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.093 [2024-11-20 18:11:06.005007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.093 [2024-11-20 18:11:06.005022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.019801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.019817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.032755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.032770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.047952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.047968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.061094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.061109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.076431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.076450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.088903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.088918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.103552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.103567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.116340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.116355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.129148] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.129168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.144138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.144154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.156780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.156794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.171787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.171802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.184448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.184463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.196412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.196426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 18937.50 IOPS, 147.95 MiB/s [2024-11-20T17:11:06.269Z] [2024-11-20 18:11:06.209348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.209362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.224024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.224038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.236482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.236498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.248515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.248530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.353 [2024-11-20 18:11:06.261176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.353 [2024-11-20 18:11:06.261190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.275982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.275998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.288891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.288906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.303660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.303675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.316540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:44:06.613 [2024-11-20 18:11:06.316554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.329211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.329226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.343834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.343850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.357299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.357314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.372127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.372142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.385394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.385409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.613 [2024-11-20 18:11:06.399936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.613 [2024-11-20 18:11:06.399950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.412702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.412717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.424649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.424664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.437550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.437565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.452130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.452145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.464783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.464797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.480084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.480099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.493295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.493309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.507565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.507581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.614 [2024-11-20 18:11:06.520307] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.614 [2024-11-20 18:11:06.520322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.532973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.532988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.547943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.547958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.560775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.560790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.573227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.573242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.587809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.587823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.600880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.600895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.616027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.616042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.628651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.628666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.640813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.640827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.656188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.656203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.668922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.668936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.683784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.683799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.696667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.696682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.709319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.709334] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.723424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.723439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.736513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.736528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.748840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.748855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.761346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.761360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:06.875 [2024-11-20 18:11:06.775756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:06.875 [2024-11-20 18:11:06.775771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.135 [2024-11-20 18:11:06.788657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.135 [2024-11-20 18:11:06.788673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.135 [2024-11-20 18:11:06.800387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.135 [2024-11-20 18:11:06.800402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.135 [2024-11-20 18:11:06.812877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.135 [2024-11-20 18:11:06.812891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.135 [2024-11-20 18:11:06.827504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.135 [2024-11-20 18:11:06.827524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.135 [2024-11-20 18:11:06.840574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.135 [2024-11-20 18:11:06.840589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.852969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.852984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.867692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.867707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.880505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.880520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.893002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.893017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.907972] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.907987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.920780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.920796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.933104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.933119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.948165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.948181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.960944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.960958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.975953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.975968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:06.988381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:06.988396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:07.000999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:07.001013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:07.015774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:07.015789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:07.028757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:07.028772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.136 [2024-11-20 18:11:07.040787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.136 [2024-11-20 18:11:07.040801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.055325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.055341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.068252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.068267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.080652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.080671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.093293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.093308] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.107698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.107713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.120824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.120840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.133054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.133069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.147829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.147844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.160689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.160704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.173084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.173099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.187284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.187299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.200142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.200162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 18961.00 IOPS, 148.13 MiB/s [2024-11-20T17:11:07.313Z] [2024-11-20 18:11:07.212521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.212536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.225165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.225180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.239860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.239876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.252331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.252346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.265153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.265172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.280321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.280337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 
18:11:07.293060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.293075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.397 [2024-11-20 18:11:07.308275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.397 [2024-11-20 18:11:07.308291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.320995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.321010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.335554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.335574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.348287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.348303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.360849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.360865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.372930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.372945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.387681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.387697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.400770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.400785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.412409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.412424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.425383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.425398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.440407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.440423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.453185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.453200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.467841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.467857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.480927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.480942] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.495884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.495899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.509067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.509082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.523824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.523840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.536805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.536821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.549360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.549375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.659 [2024-11-20 18:11:07.563562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.659 [2024-11-20 18:11:07.563577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.576066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.576081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.588316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.588336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.601045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.601060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.615570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.615586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.628591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.628607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.641357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.641373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.655751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.655768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.668404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.668419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.681138] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.681153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.695451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.695466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.708109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.708125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.720717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.720732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.732330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.732345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.744572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.744588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.757319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.757333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.771370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.771385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.784173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.784189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.796775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.796789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.811420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.811435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.824182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.824197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.836716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.836732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.849122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.849136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:07.960 [2024-11-20 18:11:07.863566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:07.960 [2024-11-20 18:11:07.863581] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.876550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.876565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.888803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.888818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.900679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.900694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.913081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.913095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.927461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.927477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.940375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.940390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.953012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.953027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.967770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.967785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.980635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.980650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:07.993238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:07.993252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.008060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.008074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.020983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.020997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.036028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.036043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.048922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.048936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.064052] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.064067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.077058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.077072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.091536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.091550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.104107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.104122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.117016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.117031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.131913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.131928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.144690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.144705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.156417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.156432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.280 [2024-11-20 18:11:08.169449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.280 [2024-11-20 18:11:08.169464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.572 [2024-11-20 18:11:08.184046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.572 [2024-11-20 18:11:08.184062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.572 [2024-11-20 18:11:08.196834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.572 [2024-11-20 18:11:08.196848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.572 18960.75 IOPS, 148.13 MiB/s [2024-11-20T17:11:08.488Z] [2024-11-20 18:11:08.211566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.572 [2024-11-20 18:11:08.211581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.572 [2024-11-20 18:11:08.224712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.572 [2024-11-20 18:11:08.224727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.572 [2024-11-20 18:11:08.237365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:08.572 [2024-11-20 18:11:08.237379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:08.572 [2024-11-20 18:11:08.251542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:44:08.572 [2024-11-20 18:11:08.251557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.263994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.264009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.277099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.277113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.292021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.292036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.305039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.305053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.319364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.319379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.332535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.332555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.344813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.344828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.359299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.359314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.372286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.372301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.384812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.384827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.396646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.396661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.409625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.409640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.424028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.424043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.436825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.436840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.451643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.451657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.464453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.464468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.572 [2024-11-20 18:11:08.476736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.572 [2024-11-20 18:11:08.476751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.491869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.491884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.504812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.504827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.519837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.519852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.532371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.532386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.544553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.544568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.557078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.557092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.571834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.571848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.584719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.584739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.597039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.597054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.611677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.611692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.881 [2024-11-20 18:11:08.624761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.881 [2024-11-20 18:11:08.624776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.637520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.637535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.652245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.652260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.665418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.665434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.680215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.680232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.693178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.693193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.707711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.707726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.720792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.720807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.732864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.732879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.747813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.747828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.760487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.760502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.772884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.772899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:08.882 [2024-11-20 18:11:08.787809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:08.882 [2024-11-20 18:11:08.787824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.800417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.800432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.812239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.812255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.825187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.825202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.839988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.840007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.852623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.852638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.864203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.864217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.877278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.877293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.892060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.892075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.904891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.904905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.919386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.919401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.932799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.142 [2024-11-20 18:11:08.932813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.142 [2024-11-20 18:11:08.945660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:08.945674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.143 [2024-11-20 18:11:08.959541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:08.959556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.143 [2024-11-20 18:11:08.972450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:08.972464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.143 [2024-11-20 18:11:08.984824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:08.984838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.143 [2024-11-20 18:11:08.997461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:08.997477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.143 [2024-11-20 18:11:09.012371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:09.012387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.143 [2024-11-20 18:11:09.024801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:09.024816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.143 [2024-11-20 18:11:09.037327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:09.037342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.143 [2024-11-20 18:11:09.051427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.143 [2024-11-20 18:11:09.051443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.064561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.064576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.077337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.077352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.091962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.091982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.104562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.104577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.117336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.117351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.131327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.131342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.143825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.143841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.156490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.156505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.169229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.169244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.184205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.184220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.196651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.196666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.208418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.208433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 18957.60 IOPS, 148.11 MiB/s [2024-11-20T17:11:09.320Z]
[2024-11-20 18:11:09.218165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.218180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 
00:44:09.404 Latency(us)
00:44:09.404 [2024-11-20T17:11:09.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:09.404 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:44:09.404 Nvme1n1 : 5.01 18959.25 148.12 0.00 0.00 6745.14 2594.13 11359.57
00:44:09.404 [2024-11-20T17:11:09.320Z] ===================================================================================================================
00:44:09.404 [2024-11-20T17:11:09.320Z] Total : 18959.25 148.12 0.00 0.00 6745.14 2594.13 11359.57
00:44:09.404 [2024-11-20 18:11:09.228594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.228607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.240602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.240617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.252597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.252609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.264597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.264609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.276593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.276604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.288591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.288601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.300596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.300609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.404 [2024-11-20 18:11:09.312594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.404 [2024-11-20 18:11:09.312606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.664 [2024-11-20 18:11:09.324590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:09.664 [2024-11-20 18:11:09.324600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:09.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3011334) - No such process
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3011334
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
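The long run of "Requested NSID 1 already in use" errors above is expected behavior, not a failure: while fio drives I/O, zcopy.sh repeatedly tries to re-add namespace 1 to nqn.2016-06.io.spdk:cnode1, and the target correctly rejects every attempt because the NSID is already taken. A minimal sketch of that pattern follows; the iteration count and bdev name binding are assumptions for illustration, since the script's exact loop body is not shown in this log:

    # Sketch only: hammer nvmf_subsystem_add_ns with an NSID that is already
    # in use; every call is expected to fail, which is the behavior under test.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in {1..100}; do
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            && { echo 'unexpected success'; exit 1; }
    done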
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:44:09.664 delay0
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:09.664 18:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:44:09.664 [2024-11-20 18:11:09.432528] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:44:17.796 [2024-11-20 18:11:16.374003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde54e0 is same with the state(6) to be set
00:44:17.796 [2024-11-20 18:11:16.374043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde54e0 is same with the state(6) to be set
00:44:17.796 Initializing NVMe Controllers
00:44:17.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:44:17.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:44:17.796 Initialization complete. Launching workers.
00:44:17.796 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 7521
00:44:17.796 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7808, failed to submit 33
00:44:17.796 success 7691, unsuccessful 117, failed 0
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:44:17.796 rmmod nvme_tcp
00:44:17.796 rmmod nvme_fabrics
00:44:17.796 rmmod nvme_keyring
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3009269 ']'
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3009269
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3009269 ']'
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3009269
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3009269
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3009269'
00:44:17.796 killing process with pid 3009269
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3009269
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3009269
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
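The abort summary above (abort submitted 7808; success 7691, unsuccessful 117) comes from the abort example cancelling I/O that is deliberately kept in flight: delay0 is a delay bdev wrapped around malloc0 with roughly one second of injected latency, so queued commands live long enough to be aborted, and the unsuccessful aborts are presumably those that raced with normal completion. The sequence, reconstructed from the trace with all arguments taken from the log:

    # delay0 wraps malloc0 with ~1s latencies (values are microseconds)
    rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 5 s of queue-depth-64, 50/50 random read/write load; the example then
    # submits abort commands against its own outstanding I/O
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'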
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:44:17.796 18:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:44:19.180 
00:44:19.180 real 0m34.040s
00:44:19.180 user 0m43.487s
00:44:19.180 sys 0m12.346s
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:44:19.180 ************************************
00:44:19.180 END TEST nvmf_zcopy
00:44:19.180 ************************************
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:44:19.180 ************************************
00:44:19.180 START TEST nvmf_nmic
00:44:19.180 ************************************
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:44:19.180 * Looking for test storage...
00:44:19.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:44:19.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:44:19.180 --rc genhtml_branch_coverage=1
00:44:19.180 --rc genhtml_function_coverage=1
00:44:19.180 --rc genhtml_legend=1
00:44:19.180 --rc geninfo_all_blocks=1
00:44:19.180 --rc geninfo_unexecuted_blocks=1
00:44:19.180 
00:44:19.180 '
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:44:19.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:44:19.180 --rc genhtml_branch_coverage=1
00:44:19.180 --rc genhtml_function_coverage=1
00:44:19.180 --rc genhtml_legend=1
00:44:19.180 --rc geninfo_all_blocks=1
00:44:19.180 --rc geninfo_unexecuted_blocks=1
00:44:19.180 
00:44:19.180 '
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:44:19.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:44:19.180 --rc genhtml_branch_coverage=1
00:44:19.180 --rc genhtml_function_coverage=1
00:44:19.180 --rc genhtml_legend=1
00:44:19.180 --rc geninfo_all_blocks=1
00:44:19.180 --rc geninfo_unexecuted_blocks=1
00:44:19.180 
00:44:19.180 '
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:44:19.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:44:19.180 --rc genhtml_branch_coverage=1
00:44:19.180 --rc genhtml_function_coverage=1
00:44:19.180 --rc genhtml_legend=1
00:44:19.180 --rc geninfo_all_blocks=1
00:44:19.180 --rc geninfo_unexecuted_blocks=1
00:44:19.180 
00:44:19.180 '
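The trace above is scripts/common.sh comparing the detected lcov version (1.15) against 2: both strings are split on '.', '-' and ':' and compared field by field, numerically. A compact re-sketch of that idea (the real cmp_versions has more branches; this is a simplified stand-in):

    # Sketch: split two version strings and compare numerically per field.
    cmp_versions_sketch() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$3"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # missing fields count as 0, so "2" compares like "2.0"
            if (( ${v1[i]:-0} < ${v2[i]:-0} )); then [[ $2 == "<" || $2 == "<=" ]]; return; fi
            if (( ${v1[i]:-0} > ${v2[i]:-0} )); then [[ $2 == ">" || $2 == ">=" ]]; return; fi
        done
        [[ $2 == "==" || $2 == "<=" || $2 == ">=" ]]
    }
    cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 is older than 2"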
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:44:19.180 18:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:44:19.180 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
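As the trace here and just below shows, build_nvmf_app_args assembles the target's command line incrementally in the NVMF_APP bash array, so that later helpers can expand it word-per-element with quoting preserved. A hedged illustration of that pattern (the base command is taken from the nvmf_tgt invocation later in this log; everything else mirrors the traced appends):

    # Sketch of the array-append pattern used for NVMF_APP
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id and tracepoint group mask
    NVMF_APP+=(--interrupt-mode)                  # appended for interrupt-mode runs
    "${NVMF_APP[@]}" &                            # expands one word per element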
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:44:19.181 18:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:44:27.331 18:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}")
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}")
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 ))
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:44:27.331 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:44:27.331 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 ))
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:44:27.331 Found net devices under 0000:4b:00.0: cvl_0_0
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:44:27.331 Found net devices under 0000:4b:00.1: cvl_0_1
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
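The netns commands traced above build the test topology: the target side of the back-to-back e810 link (cvl_0_0) is moved into a dedicated network namespace so that target and initiator can talk over real NICs on a single host. Consolidated, the sequence amounts to the following (all commands taken from this trace; the final ping is the sanity check run just below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                               # initiator reaches target address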
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:44:27.331 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:44:27.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:44:27.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms
00:44:27.332 
00:44:27.332 --- 10.0.0.2 ping statistics ---
00:44:27.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:44:27.332 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:44:27.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:44:27.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:44:27.332 
00:44:27.332 --- 10.0.0.1 ping statistics ---
00:44:27.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:44:27.332 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3017872
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3017872
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3017872 ']'
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:44:27.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable
00:44:27.332 18:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:44:27.332 [2024-11-20 18:11:26.420570] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:44:27.332 [2024-11-20 18:11:26.421695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:44:27.332 [2024-11-20 18:11:26.421746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:44:27.332 [2024-11-20 18:11:26.509891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:44:27.332 [2024-11-20 18:11:26.559064] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:44:27.332 [2024-11-20 18:11:26.559118] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:44:27.332 [2024-11-20 18:11:26.559126] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:44:27.332 [2024-11-20 18:11:26.559134] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:44:27.332 [2024-11-20 18:11:26.559140] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:44:27.332 [2024-11-20 18:11:26.559232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:44:27.332 [2024-11-20 18:11:26.559457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:44:27.332 [2024-11-20 18:11:26.559597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:44:27.332 [2024-11-20 18:11:26.559597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:44:27.332 [2024-11-20 18:11:26.631688] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:44:27.332 [2024-11-20 18:11:26.632974] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:44:27.332 [2024-11-20 18:11:26.633286] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:44:27.332 [2024-11-20 18:11:26.633855] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:44:27.332 [2024-11-20 18:11:26.633903] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0
00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable
00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
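waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, conceptually amounts to polling: as long as the target pid is alive, keep probing the RPC socket until it answers. A simplified sketch of that idea (the real helper in autotest_common.sh handles more cases; rpc_get_methods is simply a cheap RPC to probe with):

    # Sketch: wait until an SPDK app's RPC socket answers, with bounded retries
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1       # process died early
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }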
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:27.593 [2024-11-20 18:11:27.368829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:44:27.593 test case1: single bdev can't be used in multiple subsystems 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:27.593 [2024-11-20 18:11:27.404085] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:44:27.593 [2024-11-20 18:11:27.404119] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:44:27.593 [2024-11-20 18:11:27.404128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:27.593 request: 00:44:27.593 { 00:44:27.593 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:44:27.593 "namespace": { 00:44:27.593 "bdev_name": "Malloc0", 00:44:27.593 "no_auto_visible": false 00:44:27.593 }, 00:44:27.593 "method": "nvmf_subsystem_add_ns", 00:44:27.593 "req_id": 1 00:44:27.593 } 00:44:27.593 Got JSON-RPC error response 00:44:27.593 response: 00:44:27.593 { 00:44:27.593 "code": -32602, 00:44:27.593 "message": "Invalid parameters" 00:44:27.593 } 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:44:27.593 Adding namespace failed - expected result. 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:44:27.593 test case2: host connect to nvmf target in multiple paths 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:27.593 [2024-11-20 18:11:27.416230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.593 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:28.165 18:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:44:28.737 18:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:44:28.737 18:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:44:28.737 18:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:44:28.737 18:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:44:28.737 18:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:44:30.648 18:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:44:30.648 18:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:44:30.648 18:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:44:30.648 18:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:44:30.648 18:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:44:30.648 18:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:44:30.648 18:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:44:30.648 [global] 00:44:30.648 thread=1 00:44:30.648 invalidate=1 00:44:30.648 rw=write 00:44:30.648 time_based=1 00:44:30.648 runtime=1 00:44:30.648 ioengine=libaio 00:44:30.648 direct=1 00:44:30.648 bs=4096 00:44:30.648 iodepth=1 
00:44:30.648 norandommap=0 00:44:30.648 numjobs=1 00:44:30.648 00:44:30.648 verify_dump=1 00:44:30.648 verify_backlog=512 00:44:30.648 verify_state_save=0 00:44:30.648 do_verify=1 00:44:30.648 verify=crc32c-intel 00:44:30.648 [job0] 00:44:30.648 filename=/dev/nvme0n1 00:44:30.648 Could not set queue depth (nvme0n1) 00:44:30.908 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:30.908 fio-3.35 00:44:30.908 Starting 1 thread 00:44:32.292 00:44:32.292 job0: (groupid=0, jobs=1): err= 0: pid=3018732: Wed Nov 20 18:11:31 2024 00:44:32.292 read: IOPS=537, BW=2150KiB/s (2201kB/s)(2152KiB/1001msec) 00:44:32.292 slat (nsec): min=6957, max=57934, avg=23370.00, stdev=7591.50 00:44:32.292 clat (usec): min=445, max=1171, avg=768.21, stdev=90.38 00:44:32.292 lat (usec): min=453, max=1197, avg=791.58, stdev=92.55 00:44:32.292 clat percentiles (usec): 00:44:32.292 | 1.00th=[ 553], 5.00th=[ 644], 10.00th=[ 668], 20.00th=[ 701], 00:44:32.292 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 766], 60.00th=[ 775], 00:44:32.292 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 996], 00:44:32.292 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1172], 00:44:32.292 | 99.99th=[ 1172] 00:44:32.292 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:44:32.292 slat (usec): min=9, max=30763, avg=59.22, stdev=960.51 00:44:32.292 clat (usec): min=163, max=814, avg=491.90, stdev=131.78 00:44:32.292 lat (usec): min=177, max=31116, avg=551.11, stdev=965.75 00:44:32.292 clat percentiles (usec): 00:44:32.292 | 1.00th=[ 223], 5.00th=[ 289], 10.00th=[ 326], 20.00th=[ 371], 00:44:32.292 | 30.00th=[ 420], 40.00th=[ 453], 50.00th=[ 474], 60.00th=[ 515], 00:44:32.292 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 676], 95.00th=[ 701], 00:44:32.292 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 783], 99.95th=[ 816], 00:44:32.292 | 99.99th=[ 816] 00:44:32.292 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:44:32.292 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:32.292 lat (usec) : 250=1.92%, 500=35.85%, 750=37.52%, 1000=23.18% 00:44:32.292 lat (msec) : 2=1.54% 00:44:32.292 cpu : usr=2.30%, sys=4.10%, ctx=1565, majf=0, minf=1 00:44:32.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:32.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:32.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:32.292 issued rwts: total=538,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:32.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:32.292 00:44:32.292 Run status group 0 (all jobs): 00:44:32.292 READ: bw=2150KiB/s (2201kB/s), 2150KiB/s-2150KiB/s (2201kB/s-2201kB/s), io=2152KiB (2204kB), run=1001-1001msec 00:44:32.292 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:44:32.292 00:44:32.292 Disk stats (read/write): 00:44:32.292 nvme0n1: ios=538/906, merge=0/0, ticks=1345/428, in_queue=1773, util=99.00% 00:44:32.292 18:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:32.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:32.292 18:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:32.292 rmmod nvme_tcp 00:44:32.292 rmmod nvme_fabrics 00:44:32.292 rmmod nvme_keyring 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3017872 ']' 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3017872 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3017872 ']' 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3017872 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:32.292 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3017872 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3017872' 00:44:32.552 killing process with pid 3017872 00:44:32.552 18:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3017872 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3017872 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:32.552 18:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:35.094 00:44:35.094 real 0m15.677s 00:44:35.094 user 0m36.639s 00:44:35.094 sys 0m7.360s 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:44:35.094 ************************************ 00:44:35.094 END TEST nvmf_nmic 00:44:35.094 ************************************ 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:35.094 ************************************ 00:44:35.094 START TEST nvmf_fio_target 00:44:35.094 ************************************ 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:44:35.094 * Looking for test storage... 
00:44:35.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:44:35.094 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:35.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.095 --rc genhtml_branch_coverage=1 00:44:35.095 --rc genhtml_function_coverage=1 00:44:35.095 --rc genhtml_legend=1 00:44:35.095 --rc geninfo_all_blocks=1 00:44:35.095 --rc geninfo_unexecuted_blocks=1 00:44:35.095 00:44:35.095 ' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:35.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.095 --rc genhtml_branch_coverage=1 00:44:35.095 --rc genhtml_function_coverage=1 00:44:35.095 --rc genhtml_legend=1 00:44:35.095 --rc geninfo_all_blocks=1 00:44:35.095 --rc geninfo_unexecuted_blocks=1 00:44:35.095 00:44:35.095 ' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:35.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.095 --rc genhtml_branch_coverage=1 00:44:35.095 --rc genhtml_function_coverage=1 00:44:35.095 --rc genhtml_legend=1 00:44:35.095 --rc geninfo_all_blocks=1 00:44:35.095 --rc geninfo_unexecuted_blocks=1 00:44:35.095 00:44:35.095 ' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:35.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.095 --rc genhtml_branch_coverage=1 00:44:35.095 --rc genhtml_function_coverage=1 00:44:35.095 --rc genhtml_legend=1 00:44:35.095 --rc geninfo_all_blocks=1 00:44:35.095 --rc geninfo_unexecuted_blocks=1 00:44:35.095 
00:44:35.095 ' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:35.095 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:44:35.096 18:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:43.232 18:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # 
for pci in "${pci_devs[@]}" 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:43.232 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:43.232 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:43.232 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:43.233 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.233 18:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:43.233 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:43.233 18:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:43.233 18:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:43.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:43.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:44:43.233 00:44:43.233 --- 10.0.0.2 ping statistics --- 00:44:43.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.233 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:43.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:43.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:44:43.233 00:44:43.233 --- 10.0.0.1 ping statistics --- 00:44:43.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.233 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3023042 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3023042 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3023042 ']' 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
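The waitforlisten step here blocks until the freshly started nvmf_tgt (pid 3023042 above) is accepting JSON-RPC requests on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming only the pid and the default socket path shown in the log — an illustration, not the autotest helper itself:

pid=3023042
rpc_sock=/var/tmp/spdk.sock
for i in $(seq 1 100); do
    # Bail out if the target died instead of coming up.
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
    # The UNIX domain socket appears once the app is listening for RPCs.
    [ -S "$rpc_sock" ] && break
    sleep 0.5
done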
00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.233 [2024-11-20 18:11:42.140734] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:43.233 [2024-11-20 18:11:42.142280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:44:43.233 [2024-11-20 18:11:42.142339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:43.233 [2024-11-20 18:11:42.235587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:43.233 [2024-11-20 18:11:42.282859] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:43.233 [2024-11-20 18:11:42.282915] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:43.233 [2024-11-20 18:11:42.282923] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:43.233 [2024-11-20 18:11:42.282930] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:43.233 [2024-11-20 18:11:42.282936] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:43.233 [2024-11-20 18:11:42.283000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:43.233 [2024-11-20 18:11:42.283126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:43.233 [2024-11-20 18:11:42.283284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:44:43.233 [2024-11-20 18:11:42.283434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:43.233 [2024-11-20 18:11:42.358462] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:43.233 [2024-11-20 18:11:42.359138] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:43.233 [2024-11-20 18:11:42.360156] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:43.233 [2024-11-20 18:11:42.360174] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:43.233 [2024-11-20 18:11:42.360295] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
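Once the target reports its reactors and threads up, the script builds the fio test topology over rpc.py. Condensed into one sequence for reference — every size, flag, NQN, and address is taken verbatim from the rpc.py and nvme calls that follow in this log; only the loops and the $rpc shorthand are added for readability:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data
for i in 0 1 2 3 4 5 6; do
    $rpc bdev_malloc_create 64 512                  # auto-named Malloc0..Malloc6
done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $ns
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
             --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420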
00:44:43.233 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:43.234 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:44:43.234 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:44:43.234 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:43.234 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.234 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:43.234 18:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:43.494 [2024-11-20 18:11:43.160335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:43.494 18:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:43.754 18:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:44:43.754 18:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:43.754 18:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:44:43.754 18:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:44.014 18:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:44:44.014 18:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:44.274 18:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:44:44.274 18:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:44:44.535 18:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:44.535 18:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:44:44.535 18:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:44.795 18:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:44:44.795 18:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:45.055 18:11:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:44:45.055 18:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:44:45.315 18:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:45.315 18:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:45.315 18:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:45.575 18:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:45.575 18:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:44:45.835 18:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:45.835 [2024-11-20 18:11:45.748322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:46.096 18:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:44:46.096 18:11:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:44:46.356 18:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:46.926 18:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:44:46.926 18:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:44:46.926 18:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:44:46.926 18:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:44:46.926 18:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:44:46.926 18:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:44:48.834 18:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:44:48.834 18:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:44:48.834 18:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:44:48.834 18:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:44:48.834 18:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:44:48.834 18:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:44:48.834 18:11:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:44:48.834 [global] 00:44:48.834 thread=1 00:44:48.834 invalidate=1 00:44:48.834 rw=write 00:44:48.834 time_based=1 00:44:48.834 runtime=1 00:44:48.834 ioengine=libaio 00:44:48.834 direct=1 00:44:48.834 bs=4096 00:44:48.834 iodepth=1 00:44:48.834 norandommap=0 00:44:48.834 numjobs=1 00:44:48.834 00:44:48.834 verify_dump=1 00:44:48.834 verify_backlog=512 00:44:48.834 verify_state_save=0 00:44:48.834 do_verify=1 00:44:48.834 verify=crc32c-intel 00:44:48.834 [job0] 00:44:48.834 filename=/dev/nvme0n1 00:44:48.834 [job1] 00:44:48.834 filename=/dev/nvme0n2 00:44:48.834 [job2] 00:44:48.834 filename=/dev/nvme0n3 00:44:48.834 [job3] 00:44:48.834 filename=/dev/nvme0n4 00:44:49.113 Could not set queue depth (nvme0n1) 00:44:49.113 Could not set queue depth (nvme0n2) 00:44:49.113 Could not set queue depth (nvme0n3) 00:44:49.113 Could not set queue depth (nvme0n4) 00:44:49.393 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:49.393 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:49.393 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:49.393 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:49.393 fio-3.35 00:44:49.393 Starting 4 threads 00:44:50.809 00:44:50.809 job0: (groupid=0, jobs=1): err= 0: pid=3024603: Wed Nov 20 18:11:50 2024 00:44:50.809 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:50.809 slat (nsec): min=7250, max=61364, avg=25212.28, stdev=5811.30 00:44:50.809 clat (usec): min=450, max=1449, avg=976.38, stdev=155.16 00:44:50.809 lat (usec): min=477, max=1476, avg=1001.59, stdev=157.28 00:44:50.809 clat percentiles (usec): 00:44:50.809 | 1.00th=[ 519], 5.00th=[ 635], 10.00th=[ 775], 20.00th=[ 881], 00:44:50.809 | 30.00th=[ 930], 40.00th=[ 971], 50.00th=[ 1004], 60.00th=[ 1037], 00:44:50.809 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:44:50.809 | 99.00th=[ 1237], 99.50th=[ 1287], 99.90th=[ 1450], 99.95th=[ 1450], 00:44:50.809 | 99.99th=[ 1450] 00:44:50.809 write: IOPS=704, BW=2817KiB/s (2885kB/s)(2820KiB/1001msec); 0 zone resets 00:44:50.809 slat (nsec): min=9750, max=79687, avg=30995.28, stdev=9928.61 00:44:50.809 clat (usec): min=279, max=988, avg=645.46, stdev=124.93 00:44:50.809 lat (usec): min=290, max=1023, avg=676.46, stdev=129.21 00:44:50.809 clat percentiles (usec): 00:44:50.809 | 1.00th=[ 334], 5.00th=[ 420], 10.00th=[ 465], 20.00th=[ 553], 00:44:50.809 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:44:50.809 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 848], 00:44:50.809 | 99.00th=[ 
914], 99.50th=[ 971], 99.90th=[ 988], 99.95th=[ 988], 00:44:50.809 | 99.99th=[ 988] 00:44:50.809 bw ( KiB/s): min= 4096, max= 4096, per=46.93%, avg=4096.00, stdev= 0.00, samples=1 00:44:50.809 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:50.809 lat (usec) : 500=8.46%, 750=42.32%, 1000=27.20% 00:44:50.809 lat (msec) : 2=22.02% 00:44:50.809 cpu : usr=1.70%, sys=3.70%, ctx=1219, majf=0, minf=1 00:44:50.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:50.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.809 issued rwts: total=512,705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:50.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:50.809 job1: (groupid=0, jobs=1): err= 0: pid=3024604: Wed Nov 20 18:11:50 2024 00:44:50.809 read: IOPS=15, BW=62.3KiB/s (63.8kB/s)(64.0KiB/1027msec) 00:44:50.809 slat (nsec): min=25114, max=29706, avg=25893.19, stdev=1387.45 00:44:50.809 clat (usec): min=40978, max=42114, avg=41809.97, stdev=331.82 00:44:50.809 lat (usec): min=41008, max=42140, avg=41835.86, stdev=331.03 00:44:50.809 clat percentiles (usec): 00:44:50.809 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:44:50.809 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:44:50.809 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:50.809 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:50.809 | 99.99th=[42206] 00:44:50.809 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:44:50.809 slat (nsec): min=9668, max=86034, avg=29087.80, stdev=9032.31 00:44:50.809 clat (usec): min=361, max=951, avg=662.37, stdev=111.91 00:44:50.809 lat (usec): min=379, max=984, avg=691.46, stdev=115.54 00:44:50.809 clat percentiles (usec): 00:44:50.809 | 1.00th=[ 379], 5.00th=[ 469], 10.00th=[ 502], 20.00th=[ 586], 00:44:50.809 | 30.00th=[ 611], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 709], 00:44:50.809 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 832], 00:44:50.809 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 955], 99.95th=[ 955], 00:44:50.809 | 99.99th=[ 955] 00:44:50.809 bw ( KiB/s): min= 4096, max= 4096, per=46.93%, avg=4096.00, stdev= 0.00, samples=1 00:44:50.809 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:50.809 lat (usec) : 500=9.66%, 750=65.15%, 1000=22.16% 00:44:50.809 lat (msec) : 50=3.03% 00:44:50.809 cpu : usr=0.49%, sys=1.66%, ctx=528, majf=0, minf=2 00:44:50.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:50.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.809 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:50.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:50.809 job2: (groupid=0, jobs=1): err= 0: pid=3024605: Wed Nov 20 18:11:50 2024 00:44:50.809 read: IOPS=15, BW=63.6KiB/s (65.1kB/s)(64.0KiB/1006msec) 00:44:50.809 slat (nsec): min=27061, max=28372, avg=27565.13, stdev=385.68 00:44:50.809 clat (usec): min=40863, max=42095, avg=41750.66, stdev=432.92 00:44:50.809 lat (usec): min=40891, max=42123, avg=41778.23, stdev=432.62 00:44:50.809 clat percentiles (usec): 00:44:50.809 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:44:50.809 | 30.00th=[41681], 
40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:44:50.809 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:50.809 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:50.809 | 99.99th=[42206] 00:44:50.809 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:44:50.810 slat (nsec): min=9577, max=69069, avg=31775.12, stdev=9955.39 00:44:50.810 clat (usec): min=227, max=955, avg=617.80, stdev=116.89 00:44:50.810 lat (usec): min=238, max=991, avg=649.57, stdev=121.73 00:44:50.810 clat percentiles (usec): 00:44:50.810 | 1.00th=[ 363], 5.00th=[ 408], 10.00th=[ 465], 20.00th=[ 519], 00:44:50.810 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:44:50.810 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 799], 00:44:50.810 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:44:50.810 | 99.99th=[ 955] 00:44:50.810 bw ( KiB/s): min= 4096, max= 4096, per=46.93%, avg=4096.00, stdev= 0.00, samples=1 00:44:50.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:50.810 lat (usec) : 250=0.19%, 500=16.86%, 750=68.56%, 1000=11.36% 00:44:50.810 lat (msec) : 50=3.03% 00:44:50.810 cpu : usr=1.09%, sys=1.99%, ctx=529, majf=0, minf=1 00:44:50.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:50.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.810 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:50.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:50.810 job3: (groupid=0, jobs=1): err= 0: pid=3024606: Wed Nov 20 18:11:50 2024 00:44:50.810 read: IOPS=157, BW=630KiB/s (645kB/s)(644KiB/1022msec) 00:44:50.810 slat (nsec): min=6916, max=44229, avg=22744.49, stdev=8009.67 00:44:50.810 clat (usec): min=511, max=42024, avg=3918.63, stdev=10764.67 00:44:50.810 lat (usec): min=521, max=42052, avg=3941.38, stdev=10766.46 00:44:50.810 clat percentiles (usec): 00:44:50.810 | 1.00th=[ 529], 5.00th=[ 619], 10.00th=[ 676], 20.00th=[ 758], 00:44:50.810 | 30.00th=[ 824], 40.00th=[ 873], 50.00th=[ 922], 60.00th=[ 955], 00:44:50.810 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1074], 95.00th=[41681], 00:44:50.810 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:50.810 | 99.99th=[42206] 00:44:50.810 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:44:50.810 slat (nsec): min=10354, max=61728, avg=34620.55, stdev=9168.38 00:44:50.810 clat (usec): min=264, max=1116, avg=709.44, stdev=145.69 00:44:50.810 lat (usec): min=299, max=1152, avg=744.06, stdev=147.86 00:44:50.810 clat percentiles (usec): 00:44:50.810 | 1.00th=[ 367], 5.00th=[ 478], 10.00th=[ 510], 20.00th=[ 586], 00:44:50.810 | 30.00th=[ 627], 40.00th=[ 676], 50.00th=[ 717], 60.00th=[ 750], 00:44:50.810 | 70.00th=[ 783], 80.00th=[ 832], 90.00th=[ 898], 95.00th=[ 955], 00:44:50.810 | 99.00th=[ 1029], 99.50th=[ 1057], 99.90th=[ 1123], 99.95th=[ 1123], 00:44:50.810 | 99.99th=[ 1123] 00:44:50.810 bw ( KiB/s): min= 4096, max= 4096, per=46.93%, avg=4096.00, stdev= 0.00, samples=1 00:44:50.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:50.810 lat (usec) : 500=6.54%, 750=43.68%, 1000=43.68% 00:44:50.810 lat (msec) : 2=4.31%, 50=1.78% 00:44:50.810 cpu : usr=0.69%, sys=2.35%, ctx=675, majf=0, minf=1 00:44:50.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:44:50.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:50.810 issued rwts: total=161,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:50.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:50.810 00:44:50.810 Run status group 0 (all jobs): 00:44:50.810 READ: bw=2746KiB/s (2812kB/s), 62.3KiB/s-2046KiB/s (63.8kB/s-2095kB/s), io=2820KiB (2888kB), run=1001-1027msec 00:44:50.810 WRITE: bw=8728KiB/s (8938kB/s), 1994KiB/s-2817KiB/s (2042kB/s-2885kB/s), io=8964KiB (9179kB), run=1001-1027msec 00:44:50.810 00:44:50.810 Disk stats (read/write): 00:44:50.810 nvme0n1: ios=516/512, merge=0/0, ticks=1166/328, in_queue=1494, util=84.27% 00:44:50.810 nvme0n2: ios=61/512, merge=0/0, ticks=559/332, in_queue=891, util=90.91% 00:44:50.810 nvme0n3: ios=68/512, merge=0/0, ticks=1247/254, in_queue=1501, util=92.07% 00:44:50.810 nvme0n4: ios=205/512, merge=0/0, ticks=541/354, in_queue=895, util=97.22% 00:44:50.810 18:11:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:44:50.810 [global] 00:44:50.810 thread=1 00:44:50.810 invalidate=1 00:44:50.810 rw=randwrite 00:44:50.810 time_based=1 00:44:50.810 runtime=1 00:44:50.810 ioengine=libaio 00:44:50.810 direct=1 00:44:50.810 bs=4096 00:44:50.810 iodepth=1 00:44:50.810 norandommap=0 00:44:50.810 numjobs=1 00:44:50.810 00:44:50.810 verify_dump=1 00:44:50.810 verify_backlog=512 00:44:50.810 verify_state_save=0 00:44:50.810 do_verify=1 00:44:50.810 verify=crc32c-intel 00:44:50.810 [job0] 00:44:50.810 filename=/dev/nvme0n1 00:44:50.810 [job1] 00:44:50.810 filename=/dev/nvme0n2 00:44:50.810 [job2] 00:44:50.810 filename=/dev/nvme0n3 00:44:50.810 [job3] 00:44:50.810 filename=/dev/nvme0n4 00:44:50.810 Could not set queue depth (nvme0n1) 00:44:50.810 Could not set queue depth (nvme0n2) 00:44:50.810 Could not set queue depth (nvme0n3) 00:44:50.810 Could not set queue depth (nvme0n4) 00:44:51.075 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:51.075 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:51.075 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:51.075 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:51.075 fio-3.35 00:44:51.075 Starting 4 threads 00:44:52.475 00:44:52.475 job0: (groupid=0, jobs=1): err= 0: pid=3025117: Wed Nov 20 18:11:51 2024 00:44:52.475 read: IOPS=19, BW=77.4KiB/s (79.2kB/s)(80.0KiB/1034msec) 00:44:52.475 slat (nsec): min=10434, max=27430, avg=25977.00, stdev=3665.76 00:44:52.475 clat (usec): min=819, max=41890, avg=39134.92, stdev=9024.88 00:44:52.475 lat (usec): min=846, max=41918, avg=39160.89, stdev=9024.62 00:44:52.475 clat percentiles (usec): 00:44:52.475 | 1.00th=[ 824], 5.00th=[ 824], 10.00th=[40633], 20.00th=[41157], 00:44:52.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:52.475 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:44:52.475 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:44:52.475 | 99.99th=[41681] 00:44:52.475 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:44:52.475 slat 
(nsec): min=9740, max=51690, avg=28105.09, stdev=10409.08 00:44:52.475 clat (usec): min=216, max=637, avg=453.03, stdev=71.31 00:44:52.475 lat (usec): min=237, max=666, avg=481.13, stdev=75.40 00:44:52.475 clat percentiles (usec): 00:44:52.475 | 1.00th=[ 273], 5.00th=[ 326], 10.00th=[ 359], 20.00th=[ 383], 00:44:52.475 | 30.00th=[ 429], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 478], 00:44:52.475 | 70.00th=[ 490], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 562], 00:44:52.475 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 635], 99.95th=[ 635], 00:44:52.475 | 99.99th=[ 635] 00:44:52.475 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:44:52.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:52.475 lat (usec) : 250=0.38%, 500=71.99%, 750=23.87%, 1000=0.19% 00:44:52.475 lat (msec) : 50=3.57% 00:44:52.475 cpu : usr=1.26%, sys=0.87%, ctx=535, majf=0, minf=1 00:44:52.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:52.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.475 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:52.475 job1: (groupid=0, jobs=1): err= 0: pid=3025118: Wed Nov 20 18:11:51 2024 00:44:52.475 read: IOPS=506, BW=2025KiB/s (2074kB/s)(2076KiB/1025msec) 00:44:52.475 slat (nsec): min=6836, max=58086, avg=22764.87, stdev=8243.15 00:44:52.475 clat (usec): min=173, max=41986, avg=1133.71, stdev=4756.93 00:44:52.475 lat (usec): min=188, max=42012, avg=1156.47, stdev=4757.43 00:44:52.475 clat percentiles (usec): 00:44:52.475 | 1.00th=[ 314], 5.00th=[ 449], 10.00th=[ 486], 20.00th=[ 523], 00:44:52.475 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 603], 00:44:52.475 | 70.00th=[ 619], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 709], 00:44:52.475 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:52.475 | 99.99th=[42206] 00:44:52.475 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:44:52.475 slat (nsec): min=9103, max=65729, avg=28291.43, stdev=9240.76 00:44:52.475 clat (usec): min=115, max=829, avg=374.58, stdev=100.65 00:44:52.475 lat (usec): min=125, max=862, avg=402.87, stdev=101.65 00:44:52.475 clat percentiles (usec): 00:44:52.476 | 1.00th=[ 155], 5.00th=[ 245], 10.00th=[ 265], 20.00th=[ 293], 00:44:52.476 | 30.00th=[ 322], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 392], 00:44:52.476 | 70.00th=[ 416], 80.00th=[ 437], 90.00th=[ 486], 95.00th=[ 553], 00:44:52.476 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 791], 99.95th=[ 832], 00:44:52.476 | 99.99th=[ 832] 00:44:52.476 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=2 00:44:52.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:44:52.476 lat (usec) : 250=4.15%, 500=60.73%, 750=34.02%, 1000=0.65% 00:44:52.476 lat (msec) : 50=0.45% 00:44:52.476 cpu : usr=1.86%, sys=4.49%, ctx=1543, majf=0, minf=2 00:44:52.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.476 issued rwts: total=519,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:52.476 job2: (groupid=0, 
jobs=1): err= 0: pid=3025119: Wed Nov 20 18:11:51 2024 00:44:52.476 read: IOPS=296, BW=1185KiB/s (1213kB/s)(1200KiB/1013msec) 00:44:52.476 slat (nsec): min=8349, max=47026, avg=26887.21, stdev=3599.72 00:44:52.476 clat (usec): min=503, max=42039, avg=2161.34, stdev=6557.03 00:44:52.476 lat (usec): min=531, max=42067, avg=2188.23, stdev=6556.93 00:44:52.476 clat percentiles (usec): 00:44:52.476 | 1.00th=[ 570], 5.00th=[ 807], 10.00th=[ 914], 20.00th=[ 1029], 00:44:52.476 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:44:52.476 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1254], 00:44:52.476 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:52.476 | 99.99th=[42206] 00:44:52.476 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:44:52.476 slat (nsec): min=9923, max=51960, avg=31093.54, stdev=8581.23 00:44:52.476 clat (usec): min=345, max=1016, avg=650.42, stdev=123.69 00:44:52.476 lat (usec): min=356, max=1048, avg=681.52, stdev=127.68 00:44:52.476 clat percentiles (usec): 00:44:52.476 | 1.00th=[ 371], 5.00th=[ 416], 10.00th=[ 469], 20.00th=[ 553], 00:44:52.476 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 701], 00:44:52.476 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 824], 00:44:52.476 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 1020], 99.95th=[ 1020], 00:44:52.476 | 99.99th=[ 1020] 00:44:52.476 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:44:52.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:52.476 lat (usec) : 500=8.62%, 750=40.89%, 1000=19.33% 00:44:52.476 lat (msec) : 2=30.17%, 50=0.99% 00:44:52.476 cpu : usr=1.09%, sys=2.57%, ctx=813, majf=0, minf=1 00:44:52.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.476 issued rwts: total=300,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:52.476 job3: (groupid=0, jobs=1): err= 0: pid=3025120: Wed Nov 20 18:11:51 2024 00:44:52.476 read: IOPS=139, BW=558KiB/s (572kB/s)(560KiB/1003msec) 00:44:52.476 slat (nsec): min=2579, max=59971, avg=25459.79, stdev=6417.52 00:44:52.476 clat (usec): min=548, max=42204, avg=4605.10, stdev=11474.01 00:44:52.476 lat (usec): min=551, max=42231, avg=4630.56, stdev=11473.98 00:44:52.476 clat percentiles (usec): 00:44:52.476 | 1.00th=[ 652], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 979], 00:44:52.476 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156], 00:44:52.476 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1369], 95.00th=[41681], 00:44:52.476 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:52.476 | 99.99th=[42206] 00:44:52.476 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:44:52.476 slat (nsec): min=9848, max=52037, avg=31254.51, stdev=8360.88 00:44:52.476 clat (usec): min=300, max=1040, avg=649.83, stdev=135.58 00:44:52.476 lat (usec): min=312, max=1073, avg=681.09, stdev=138.78 00:44:52.476 clat percentiles (usec): 00:44:52.476 | 1.00th=[ 375], 5.00th=[ 424], 10.00th=[ 474], 20.00th=[ 529], 00:44:52.476 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:44:52.476 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 816], 95.00th=[ 873], 00:44:52.476 | 99.00th=[ 963], 
99.50th=[ 1012], 99.90th=[ 1037], 99.95th=[ 1037], 00:44:52.476 | 99.99th=[ 1037] 00:44:52.476 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:44:52.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:52.476 lat (usec) : 500=11.81%, 750=48.62%, 1000=22.09% 00:44:52.476 lat (msec) : 2=15.49%, 4=0.15%, 50=1.84% 00:44:52.476 cpu : usr=1.00%, sys=2.00%, ctx=653, majf=0, minf=1 00:44:52.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:52.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:52.476 issued rwts: total=140,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:52.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:52.476 00:44:52.476 Run status group 0 (all jobs): 00:44:52.476 READ: bw=3787KiB/s (3878kB/s), 77.4KiB/s-2025KiB/s (79.2kB/s-2074kB/s), io=3916KiB (4010kB), run=1003-1034msec 00:44:52.476 WRITE: bw=9903KiB/s (10.1MB/s), 1981KiB/s-3996KiB/s (2028kB/s-4092kB/s), io=10.0MiB (10.5MB), run=1003-1034msec 00:44:52.476 00:44:52.476 Disk stats (read/write): 00:44:52.476 nvme0n1: ios=47/512, merge=0/0, ticks=1439/229, in_queue=1668, util=98.70% 00:44:52.476 nvme0n2: ios=554/1024, merge=0/0, ticks=482/356, in_queue=838, util=89.09% 00:44:52.476 nvme0n3: ios=339/512, merge=0/0, ticks=1436/323, in_queue=1759, util=98.21% 00:44:52.476 nvme0n4: ios=179/512, merge=0/0, ticks=1108/313, in_queue=1421, util=98.19% 00:44:52.476 18:11:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:44:52.476 [global] 00:44:52.476 thread=1 00:44:52.476 invalidate=1 00:44:52.476 rw=write 00:44:52.476 time_based=1 00:44:52.476 runtime=1 00:44:52.476 ioengine=libaio 00:44:52.476 direct=1 00:44:52.476 bs=4096 00:44:52.476 iodepth=128 00:44:52.476 norandommap=0 00:44:52.476 numjobs=1 00:44:52.476 00:44:52.476 verify_dump=1 00:44:52.476 verify_backlog=512 00:44:52.476 verify_state_save=0 00:44:52.476 do_verify=1 00:44:52.476 verify=crc32c-intel 00:44:52.476 [job0] 00:44:52.476 filename=/dev/nvme0n1 00:44:52.476 [job1] 00:44:52.476 filename=/dev/nvme0n2 00:44:52.476 [job2] 00:44:52.476 filename=/dev/nvme0n3 00:44:52.476 [job3] 00:44:52.476 filename=/dev/nvme0n4 00:44:52.476 Could not set queue depth (nvme0n1) 00:44:52.476 Could not set queue depth (nvme0n2) 00:44:52.476 Could not set queue depth (nvme0n3) 00:44:52.476 Could not set queue depth (nvme0n4) 00:44:52.742 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:52.742 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:52.742 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:52.742 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:52.742 fio-3.35 00:44:52.742 Starting 4 threads 00:44:54.144 00:44:54.144 job0: (groupid=0, jobs=1): err= 0: pid=3025640: Wed Nov 20 18:11:53 2024 00:44:54.144 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:44:54.144 slat (nsec): min=940, max=11007k, avg=113919.23, stdev=802551.36 00:44:54.144 clat (usec): min=4786, max=67694, avg=14647.21, stdev=8502.41 00:44:54.144 lat (usec): min=4791, max=67702, avg=14761.13, 
stdev=8569.06 00:44:54.144 clat percentiles (usec): 00:44:54.144 | 1.00th=[ 5866], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 8586], 00:44:54.144 | 30.00th=[ 9503], 40.00th=[11338], 50.00th=[13042], 60.00th=[14484], 00:44:54.144 | 70.00th=[15664], 80.00th=[17695], 90.00th=[22152], 95.00th=[27657], 00:44:54.144 | 99.00th=[51119], 99.50th=[62653], 99.90th=[67634], 99.95th=[67634], 00:44:54.144 | 99.99th=[67634] 00:44:54.144 write: IOPS=4777, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1011msec); 0 zone resets 00:44:54.144 slat (nsec): min=1637, max=11003k, avg=93142.72, stdev=662888.22 00:44:54.144 clat (usec): min=1145, max=67666, avg=12583.90, stdev=6879.11 00:44:54.144 lat (usec): min=1153, max=67668, avg=12677.04, stdev=6898.81 00:44:54.144 clat percentiles (usec): 00:44:54.144 | 1.00th=[ 4555], 5.00th=[ 6128], 10.00th=[ 7111], 20.00th=[ 8029], 00:44:54.144 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[10945], 60.00th=[11863], 00:44:54.144 | 70.00th=[14222], 80.00th=[15270], 90.00th=[19268], 95.00th=[25297], 00:44:54.144 | 99.00th=[39584], 99.50th=[51643], 99.90th=[62129], 99.95th=[67634], 00:44:54.144 | 99.99th=[67634] 00:44:54.144 bw ( KiB/s): min=17144, max=20480, per=19.18%, avg=18812.00, stdev=2358.91, samples=2 00:44:54.144 iops : min= 4286, max= 5120, avg=4703.00, stdev=589.73, samples=2 00:44:54.144 lat (msec) : 2=0.10%, 10=36.42%, 20=52.52%, 50=10.13%, 100=0.84% 00:44:54.144 cpu : usr=2.67%, sys=5.64%, ctx=310, majf=0, minf=1 00:44:54.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:44:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.144 issued rwts: total=4608,4830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.144 job1: (groupid=0, jobs=1): err= 0: pid=3025641: Wed Nov 20 18:11:53 2024 00:44:54.144 read: IOPS=8601, BW=33.6MiB/s (35.2MB/s)(34.0MiB/1011msec) 00:44:54.144 slat (nsec): min=923, max=8855.0k, avg=56585.88, stdev=417751.81 00:44:54.144 clat (usec): min=2061, max=26601, avg=7698.36, stdev=2927.25 00:44:54.144 lat (usec): min=2070, max=26602, avg=7754.95, stdev=2944.50 00:44:54.144 clat percentiles (usec): 00:44:54.144 | 1.00th=[ 3523], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5735], 00:44:54.144 | 30.00th=[ 6259], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7701], 00:44:54.144 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[10683], 95.00th=[12125], 00:44:54.144 | 99.00th=[21103], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:44:54.144 | 99.99th=[26608] 00:44:54.144 write: IOPS=8609, BW=33.6MiB/s (35.3MB/s)(34.0MiB/1011msec); 0 zone resets 00:44:54.144 slat (nsec): min=1600, max=12436k, avg=52508.96, stdev=383496.03 00:44:54.144 clat (usec): min=1321, max=41861, avg=7012.90, stdev=3413.27 00:44:54.144 lat (usec): min=1331, max=41893, avg=7065.41, stdev=3433.40 00:44:54.144 clat percentiles (usec): 00:44:54.144 | 1.00th=[ 2573], 5.00th=[ 3982], 10.00th=[ 4424], 20.00th=[ 5276], 00:44:54.144 | 30.00th=[ 5866], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 7046], 00:44:54.144 | 70.00th=[ 7177], 80.00th=[ 7635], 90.00th=[ 8848], 95.00th=[10159], 00:44:54.144 | 99.00th=[25560], 99.50th=[31851], 99.90th=[32375], 99.95th=[32375], 00:44:54.144 | 99.99th=[41681] 00:44:54.144 bw ( KiB/s): min=32768, max=36864, per=35.50%, avg=34816.00, stdev=2896.31, samples=2 00:44:54.144 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:44:54.144 lat (msec) : 
2=0.16%, 4=3.64%, 10=86.13%, 20=8.50%, 50=1.57% 00:44:54.144 cpu : usr=4.85%, sys=7.52%, ctx=639, majf=0, minf=1 00:44:54.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:44:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.144 issued rwts: total=8696,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.144 job2: (groupid=0, jobs=1): err= 0: pid=3025642: Wed Nov 20 18:11:53 2024 00:44:54.144 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:44:54.144 slat (nsec): min=1009, max=12892k, avg=116263.98, stdev=847898.01 00:44:54.144 clat (usec): min=2648, max=32621, avg=15342.85, stdev=5389.96 00:44:54.144 lat (usec): min=2685, max=32627, avg=15459.11, stdev=5423.56 00:44:54.144 clat percentiles (usec): 00:44:54.144 | 1.00th=[ 4490], 5.00th=[ 7439], 10.00th=[ 8848], 20.00th=[10945], 00:44:54.144 | 30.00th=[12518], 40.00th=[13698], 50.00th=[15139], 60.00th=[16057], 00:44:54.144 | 70.00th=[16909], 80.00th=[19530], 90.00th=[22414], 95.00th=[25297], 00:44:54.144 | 99.00th=[31065], 99.50th=[31065], 99.90th=[32113], 99.95th=[32113], 00:44:54.144 | 99.99th=[32637] 00:44:54.144 write: IOPS=4050, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:44:54.144 slat (nsec): min=1708, max=11179k, avg=134667.79, stdev=815376.12 00:44:54.144 clat (usec): min=584, max=81539, avg=17714.38, stdev=16686.51 00:44:54.144 lat (usec): min=619, max=81547, avg=17849.05, stdev=16795.46 00:44:54.144 clat percentiles (usec): 00:44:54.144 | 1.00th=[ 1123], 5.00th=[ 4080], 10.00th=[ 6063], 20.00th=[ 7242], 00:44:54.144 | 30.00th=[ 9896], 40.00th=[11207], 50.00th=[12256], 60.00th=[14222], 00:44:54.144 | 70.00th=[16319], 80.00th=[20841], 90.00th=[39060], 95.00th=[63177], 00:44:54.144 | 99.00th=[78119], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:44:54.144 | 99.99th=[81265] 00:44:54.144 bw ( KiB/s): min=15288, max=16384, per=16.15%, avg=15836.00, stdev=774.99, samples=2 00:44:54.144 iops : min= 3822, max= 4096, avg=3959.00, stdev=193.75, samples=2 00:44:54.144 lat (usec) : 750=0.03%, 1000=0.10% 00:44:54.144 lat (msec) : 2=1.68%, 4=1.24%, 10=20.62%, 20=56.28%, 50=15.96% 00:44:54.144 lat (msec) : 100=4.09% 00:44:54.144 cpu : usr=3.08%, sys=4.07%, ctx=288, majf=0, minf=1 00:44:54.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:44:54.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.145 issued rwts: total=3584,4087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.145 job3: (groupid=0, jobs=1): err= 0: pid=3025643: Wed Nov 20 18:11:53 2024 00:44:54.145 read: IOPS=6415, BW=25.1MiB/s (26.3MB/s)(25.2MiB/1005msec) 00:44:54.145 slat (nsec): min=986, max=9597.6k, avg=62475.81, stdev=476270.38 00:44:54.145 clat (usec): min=612, max=55017, avg=9985.21, stdev=4955.19 00:44:54.145 lat (usec): min=639, max=55024, avg=10047.68, stdev=4986.16 00:44:54.145 clat percentiles (usec): 00:44:54.145 | 1.00th=[ 2474], 5.00th=[ 4359], 10.00th=[ 5538], 20.00th=[ 6652], 00:44:54.145 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 9634], 00:44:54.145 | 70.00th=[10945], 80.00th=[12911], 90.00th=[16712], 95.00th=[20317], 00:44:54.145 | 99.00th=[26084], 99.50th=[26346], 
99.90th=[33817], 99.95th=[54789], 00:44:54.145 | 99.99th=[54789] 00:44:54.145 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:44:54.145 slat (nsec): min=1591, max=18493k, avg=51899.99, stdev=484844.50 00:44:54.145 clat (usec): min=378, max=113060, avg=8831.92, stdev=9229.96 00:44:54.145 lat (usec): min=411, max=113070, avg=8883.82, stdev=9242.05 00:44:54.145 clat percentiles (usec): 00:44:54.145 | 1.00th=[ 1942], 5.00th=[ 3359], 10.00th=[ 4555], 20.00th=[ 5342], 00:44:54.145 | 30.00th=[ 6194], 40.00th=[ 6915], 50.00th=[ 7439], 60.00th=[ 7898], 00:44:54.145 | 70.00th=[ 8291], 80.00th=[ 9372], 90.00th=[ 11469], 95.00th=[ 18220], 00:44:54.145 | 99.00th=[ 56886], 99.50th=[ 93848], 99.90th=[106431], 99.95th=[106431], 00:44:54.145 | 99.99th=[112722] 00:44:54.145 bw ( KiB/s): min=24576, max=32768, per=29.23%, avg=28672.00, stdev=5792.62, samples=2 00:44:54.145 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:44:54.145 lat (usec) : 500=0.03%, 750=0.01%, 1000=0.05% 00:44:54.145 lat (msec) : 2=0.98%, 4=5.24%, 10=66.31%, 20=22.77%, 50=4.02% 00:44:54.145 lat (msec) : 100=0.44%, 250=0.14% 00:44:54.145 cpu : usr=5.68%, sys=7.97%, ctx=504, majf=0, minf=1 00:44:54.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:44:54.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:54.145 issued rwts: total=6448,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:54.145 00:44:54.145 Run status group 0 (all jobs): 00:44:54.145 READ: bw=90.2MiB/s (94.5MB/s), 13.9MiB/s-33.6MiB/s (14.5MB/s-35.2MB/s), io=91.2MiB (95.6MB), run=1005-1011msec 00:44:54.145 WRITE: bw=95.8MiB/s (100MB/s), 15.8MiB/s-33.6MiB/s (16.6MB/s-35.3MB/s), io=96.8MiB (102MB), run=1005-1011msec 00:44:54.145 00:44:54.145 Disk stats (read/write): 00:44:54.145 nvme0n1: ios=3634/3727, merge=0/0, ticks=50933/43443, in_queue=94376, util=94.09% 00:44:54.145 nvme0n2: ios=6232/6656, merge=0/0, ticks=44841/42697, in_queue=87538, util=97.41% 00:44:54.145 nvme0n3: ios=3091/3247, merge=0/0, ticks=44192/48901, in_queue=93093, util=97.66% 00:44:54.145 nvme0n4: ios=5277/6144, merge=0/0, ticks=39800/44938, in_queue=84738, util=88.81% 00:44:54.145 18:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:44:54.145 [global] 00:44:54.145 thread=1 00:44:54.145 invalidate=1 00:44:54.145 rw=randwrite 00:44:54.145 time_based=1 00:44:54.145 runtime=1 00:44:54.145 ioengine=libaio 00:44:54.145 direct=1 00:44:54.145 bs=4096 00:44:54.145 iodepth=128 00:44:54.145 norandommap=0 00:44:54.145 numjobs=1 00:44:54.145 00:44:54.145 verify_dump=1 00:44:54.145 verify_backlog=512 00:44:54.145 verify_state_save=0 00:44:54.145 do_verify=1 00:44:54.145 verify=crc32c-intel 00:44:54.145 [job0] 00:44:54.145 filename=/dev/nvme0n1 00:44:54.145 [job1] 00:44:54.145 filename=/dev/nvme0n2 00:44:54.145 [job2] 00:44:54.145 filename=/dev/nvme0n3 00:44:54.145 [job3] 00:44:54.145 filename=/dev/nvme0n4 00:44:54.145 Could not set queue depth (nvme0n1) 00:44:54.145 Could not set queue depth (nvme0n2) 00:44:54.145 Could not set queue depth (nvme0n3) 00:44:54.145 Could not set queue depth (nvme0n4) 00:44:54.405 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:44:54.405 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:54.405 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:54.405 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:54.405 fio-3.35 00:44:54.405 Starting 4 threads 00:44:55.852 00:44:55.852 job0: (groupid=0, jobs=1): err= 0: pid=3026158: Wed Nov 20 18:11:55 2024 00:44:55.852 read: IOPS=7773, BW=30.4MiB/s (31.8MB/s)(30.5MiB/1004msec) 00:44:55.852 slat (nsec): min=944, max=7649.7k, avg=61625.45, stdev=417446.77 00:44:55.852 clat (usec): min=1648, max=15724, avg=8025.31, stdev=1756.47 00:44:55.852 lat (usec): min=3159, max=15728, avg=8086.94, stdev=1777.65 00:44:55.852 clat percentiles (usec): 00:44:55.852 | 1.00th=[ 4490], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6652], 00:44:55.852 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8094], 00:44:55.852 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10421], 95.00th=[11207], 00:44:55.852 | 99.00th=[13435], 99.50th=[14222], 99.90th=[15139], 99.95th=[15664], 00:44:55.852 | 99.99th=[15664] 00:44:55.852 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:44:55.852 slat (nsec): min=1564, max=16227k, avg=57821.53, stdev=393743.47 00:44:55.852 clat (usec): min=1018, max=23124, avg=7876.92, stdev=2567.20 00:44:55.852 lat (usec): min=1027, max=23129, avg=7934.74, stdev=2580.93 00:44:55.852 clat percentiles (usec): 00:44:55.852 | 1.00th=[ 3818], 5.00th=[ 4621], 10.00th=[ 5342], 20.00th=[ 6456], 00:44:55.852 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7832], 00:44:55.852 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[10159], 95.00th=[11600], 00:44:55.852 | 99.00th=[17695], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:44:55.852 | 99.99th=[23200] 00:44:55.852 bw ( KiB/s): min=32640, max=32872, per=29.91%, avg=32756.00, stdev=164.05, samples=2 00:44:55.852 iops : min= 8160, max= 8218, avg=8189.00, stdev=41.01, samples=2 00:44:55.852 lat (msec) : 2=0.12%, 4=0.82%, 10=86.97%, 20=11.65%, 50=0.44% 00:44:55.852 cpu : usr=4.59%, sys=6.98%, ctx=716, majf=0, minf=2 00:44:55.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:44:55.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:55.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:55.852 issued rwts: total=7805,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:55.852 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:55.852 job1: (groupid=0, jobs=1): err= 0: pid=3026159: Wed Nov 20 18:11:55 2024 00:44:55.852 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:44:55.852 slat (nsec): min=947, max=14891k, avg=72997.46, stdev=470827.26 00:44:55.852 clat (usec): min=2130, max=66328, avg=9936.49, stdev=4870.96 00:44:55.852 lat (usec): min=2170, max=66333, avg=10009.49, stdev=4896.93 00:44:55.852 clat percentiles (usec): 00:44:55.852 | 1.00th=[ 4490], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[ 8225], 00:44:55.852 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:44:55.852 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[11207], 95.00th=[16188], 00:44:55.852 | 99.00th=[38536], 99.50th=[40109], 99.90th=[62129], 99.95th=[62129], 00:44:55.852 | 99.99th=[66323] 00:44:55.852 write: IOPS=6150, BW=24.0MiB/s (25.2MB/s)(24.1MiB/1003msec); 0 zone resets 00:44:55.852 slat 
(nsec): min=1566, max=39068k, avg=83818.76, stdev=756420.52 00:44:55.852 clat (usec): min=1261, max=64998, avg=10460.31, stdev=8010.30 00:44:55.852 lat (usec): min=2461, max=65007, avg=10544.13, stdev=8068.64 00:44:55.852 clat percentiles (usec): 00:44:55.852 | 1.00th=[ 3064], 5.00th=[ 5145], 10.00th=[ 7111], 20.00th=[ 7963], 00:44:55.852 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:44:55.852 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[12125], 95.00th=[25035], 00:44:55.852 | 99.00th=[52167], 99.50th=[59507], 99.90th=[64750], 99.95th=[64750], 00:44:55.852 | 99.99th=[64750] 00:44:55.852 bw ( KiB/s): min=20480, max=28672, per=22.44%, avg=24576.00, stdev=5792.62, samples=2 00:44:55.852 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:44:55.852 lat (msec) : 2=0.01%, 4=1.64%, 10=78.99%, 20=14.60%, 50=3.63% 00:44:55.852 lat (msec) : 100=1.13% 00:44:55.852 cpu : usr=3.89%, sys=5.39%, ctx=581, majf=0, minf=1 00:44:55.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:44:55.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:55.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:55.852 issued rwts: total=6144,6169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:55.852 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:55.852 job2: (groupid=0, jobs=1): err= 0: pid=3026160: Wed Nov 20 18:11:55 2024 00:44:55.853 read: IOPS=6509, BW=25.4MiB/s (26.7MB/s)(25.5MiB/1003msec) 00:44:55.853 slat (nsec): min=951, max=6219.5k, avg=77263.11, stdev=413065.02 00:44:55.853 clat (usec): min=1155, max=15562, avg=9825.27, stdev=1602.54 00:44:55.853 lat (usec): min=3335, max=18713, avg=9902.54, stdev=1615.14 00:44:55.853 clat percentiles (usec): 00:44:55.853 | 1.00th=[ 5538], 5.00th=[ 7177], 10.00th=[ 8160], 20.00th=[ 8717], 00:44:55.853 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10290], 00:44:55.853 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[12125], 00:44:55.853 | 99.00th=[14353], 99.50th=[15270], 99.90th=[15533], 99.95th=[15533], 00:44:55.853 | 99.99th=[15533] 00:44:55.853 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:44:55.853 slat (nsec): min=1582, max=6364.3k, avg=71059.23, stdev=390315.60 00:44:55.853 clat (usec): min=1193, max=14432, avg=9449.05, stdev=1576.72 00:44:55.853 lat (usec): min=1202, max=15319, avg=9520.11, stdev=1588.59 00:44:55.853 clat percentiles (usec): 00:44:55.853 | 1.00th=[ 4621], 5.00th=[ 6587], 10.00th=[ 7963], 20.00th=[ 8356], 00:44:55.853 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9634], 00:44:55.853 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[11994], 00:44:55.853 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13435], 99.95th=[13435], 00:44:55.853 | 99.99th=[14484] 00:44:55.853 bw ( KiB/s): min=24576, max=28672, per=24.31%, avg=26624.00, stdev=2896.31, samples=2 00:44:55.853 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:44:55.853 lat (msec) : 2=0.06%, 4=0.24%, 10=60.20%, 20=39.49% 00:44:55.853 cpu : usr=2.69%, sys=5.29%, ctx=734, majf=0, minf=1 00:44:55.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:44:55.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:55.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:55.853 issued rwts: total=6529,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:55.853 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:44:55.853 job3: (groupid=0, jobs=1): err= 0: pid=3026161: Wed Nov 20 18:11:55 2024 00:44:55.853 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:44:55.853 slat (nsec): min=961, max=9689.4k, avg=79284.18, stdev=500468.61 00:44:55.853 clat (usec): min=3730, max=23369, avg=10472.36, stdev=2736.25 00:44:55.853 lat (usec): min=3732, max=23375, avg=10551.64, stdev=2744.34 00:44:55.853 clat percentiles (usec): 00:44:55.853 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 8455], 00:44:55.853 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10814], 00:44:55.853 | 70.00th=[11338], 80.00th=[11994], 90.00th=[13304], 95.00th=[16319], 00:44:55.853 | 99.00th=[19792], 99.50th=[20055], 99.90th=[23200], 99.95th=[23462], 00:44:55.853 | 99.99th=[23462] 00:44:55.853 write: IOPS=6460, BW=25.2MiB/s (26.5MB/s)(25.3MiB/1002msec); 0 zone resets 00:44:55.853 slat (nsec): min=1601, max=11078k, avg=75168.31, stdev=467185.25 00:44:55.853 clat (usec): min=1177, max=17342, avg=9676.58, stdev=2353.55 00:44:55.853 lat (usec): min=1187, max=17356, avg=9751.75, stdev=2357.41 00:44:55.853 clat percentiles (usec): 00:44:55.853 | 1.00th=[ 4359], 5.00th=[ 5800], 10.00th=[ 6783], 20.00th=[ 8225], 00:44:55.853 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:44:55.853 | 70.00th=[10683], 80.00th=[11600], 90.00th=[12256], 95.00th=[14222], 00:44:55.853 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:44:55.853 | 99.99th=[17433] 00:44:55.853 bw ( KiB/s): min=24776, max=25992, per=23.18%, avg=25384.00, stdev=859.84, samples=2 00:44:55.853 iops : min= 6194, max= 6498, avg=6346.00, stdev=214.96, samples=2 00:44:55.853 lat (msec) : 2=0.08%, 4=0.29%, 10=52.73%, 20=46.48%, 50=0.42% 00:44:55.853 cpu : usr=4.10%, sys=4.70%, ctx=633, majf=0, minf=1 00:44:55.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:44:55.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:55.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:55.853 issued rwts: total=6144,6473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:55.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:55.853 00:44:55.853 Run status group 0 (all jobs): 00:44:55.853 READ: bw=104MiB/s (109MB/s), 23.9MiB/s-30.4MiB/s (25.1MB/s-31.8MB/s), io=104MiB (109MB), run=1002-1004msec 00:44:55.853 WRITE: bw=107MiB/s (112MB/s), 24.0MiB/s-31.9MiB/s (25.2MB/s-33.4MB/s), io=107MiB (113MB), run=1002-1004msec 00:44:55.853 00:44:55.853 Disk stats (read/write): 00:44:55.853 nvme0n1: ios=6523/6656, merge=0/0, ticks=36758/34656, in_queue=71414, util=86.77% 00:44:55.853 nvme0n2: ios=4843/5120, merge=0/0, ticks=21842/21122, in_queue=42964, util=90.21% 00:44:55.853 nvme0n3: ios=5653/5633, merge=0/0, ticks=22708/21233, in_queue=43941, util=92.08% 00:44:55.853 nvme0n4: ios=5175/5401, merge=0/0, ticks=36627/34748, in_queue=71375, util=96.90% 00:44:55.853 18:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:44:55.853 18:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3026342 00:44:55.853 18:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:44:55.853 18:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:44:55.853 [global] 
00:44:55.853 thread=1 00:44:55.853 invalidate=1 00:44:55.853 rw=read 00:44:55.853 time_based=1 00:44:55.853 runtime=10 00:44:55.853 ioengine=libaio 00:44:55.853 direct=1 00:44:55.853 bs=4096 00:44:55.853 iodepth=1 00:44:55.853 norandommap=1 00:44:55.853 numjobs=1 00:44:55.853 00:44:55.853 [job0] 00:44:55.853 filename=/dev/nvme0n1 00:44:55.853 [job1] 00:44:55.853 filename=/dev/nvme0n2 00:44:55.853 [job2] 00:44:55.853 filename=/dev/nvme0n3 00:44:55.853 [job3] 00:44:55.853 filename=/dev/nvme0n4 00:44:55.853 Could not set queue depth (nvme0n1) 00:44:55.853 Could not set queue depth (nvme0n2) 00:44:55.853 Could not set queue depth (nvme0n3) 00:44:55.853 Could not set queue depth (nvme0n4) 00:44:56.113 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:56.113 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:56.113 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:56.113 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:56.113 fio-3.35 00:44:56.113 Starting 4 threads 00:44:58.657 18:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:44:58.657 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3862528, buflen=4096 00:44:58.657 fio: pid=3026682, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:58.918 18:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:44:58.918 18:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:58.918 18:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:44:58.918 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=8912896, buflen=4096 00:44:58.918 fio: pid=3026681, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:59.178 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9977856, buflen=4096 00:44:59.178 fio: pid=3026679, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:59.178 18:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:59.178 18:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:44:59.439 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10625024, buflen=4096 00:44:59.439 fio: pid=3026680, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:59.439 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:59.439 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:44:59.439 00:44:59.439 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3026679: Wed Nov 20 18:11:59 2024 00:44:59.439 read: IOPS=841, BW=3366KiB/s (3447kB/s)(9744KiB/2895msec) 00:44:59.439 slat (usec): min=6, max=13033, avg=32.22, stdev=263.66 00:44:59.439 clat (usec): min=664, max=42053, avg=1138.63, stdev=2256.15 00:44:59.439 lat (usec): min=690, max=42079, avg=1170.85, stdev=2333.18 00:44:59.439 clat percentiles (usec): 00:44:59.439 | 1.00th=[ 783], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 963], 00:44:59.439 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1029], 00:44:59.439 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:44:59.439 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[42206], 99.95th=[42206], 00:44:59.439 | 99.99th=[42206] 00:44:59.439 bw ( KiB/s): min= 2864, max= 3872, per=34.25%, avg=3632.00, stdev=430.55, samples=5 00:44:59.439 iops : min= 716, max= 968, avg=908.00, stdev=107.64, samples=5 00:44:59.439 lat (usec) : 750=0.33%, 1000=38.04% 00:44:59.439 lat (msec) : 2=61.22%, 4=0.04%, 50=0.33% 00:44:59.439 cpu : usr=1.24%, sys=3.63%, ctx=2440, majf=0, minf=1 00:44:59.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:59.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:59.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:59.439 issued rwts: total=2437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:59.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:59.439 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3026680: Wed Nov 20 18:11:59 2024 00:44:59.439 read: IOPS=844, BW=3375KiB/s (3456kB/s)(10.1MiB/3074msec) 00:44:59.439 slat (usec): min=6, max=26241, avg=58.35, stdev=717.55 00:44:59.439 clat (usec): min=655, max=2438, avg=1111.43, stdev=105.57 00:44:59.439 lat (usec): min=682, max=27326, avg=1169.79, stdev=723.30 00:44:59.439 clat percentiles (usec): 00:44:59.439 | 1.00th=[ 807], 5.00th=[ 914], 10.00th=[ 979], 20.00th=[ 1045], 00:44:59.439 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:44:59.439 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:44:59.439 | 99.00th=[ 1287], 99.50th=[ 1336], 99.90th=[ 1401], 99.95th=[ 2376], 00:44:59.439 | 99.99th=[ 2442] 00:44:59.439 bw ( KiB/s): min= 3234, max= 3520, per=32.11%, avg=3405.67, stdev=99.05, samples=6 00:44:59.439 iops : min= 808, max= 880, avg=851.33, stdev=24.94, samples=6 00:44:59.439 lat (usec) : 750=0.23%, 1000=13.03% 00:44:59.439 lat (msec) : 2=86.63%, 4=0.08% 00:44:59.439 cpu : usr=1.53%, sys=3.42%, ctx=2602, majf=0, minf=2 00:44:59.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:59.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:59.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:59.439 issued rwts: total=2595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:59.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:59.439 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3026681: Wed Nov 20 18:11:59 2024 00:44:59.439 read: IOPS=793, BW=3172KiB/s (3248kB/s)(8704KiB/2744msec) 00:44:59.439 slat (usec): min=6, max=234, avg=27.54, stdev= 5.64 00:44:59.439 clat (usec): min=516, max=42027, avg=1216.93, stdev=2895.25 00:44:59.439 
lat (usec): min=543, max=42053, avg=1244.47, stdev=2896.51 00:44:59.439 clat percentiles (usec): 00:44:59.439 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 889], 20.00th=[ 955], 00:44:59.439 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1037], 00:44:59.439 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:44:59.439 | 99.00th=[ 1205], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:44:59.439 | 99.99th=[42206] 00:44:59.439 bw ( KiB/s): min= 2008, max= 3880, per=32.74%, avg=3472.00, stdev=819.01, samples=5 00:44:59.439 iops : min= 502, max= 970, avg=868.00, stdev=204.75, samples=5 00:44:59.439 lat (usec) : 750=1.24%, 1000=34.91% 00:44:59.439 lat (msec) : 2=63.25%, 4=0.05%, 50=0.51% 00:44:59.439 cpu : usr=1.02%, sys=3.61%, ctx=2178, majf=0, minf=2 00:44:59.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:59.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:59.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:59.439 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:59.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:59.439 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3026682: Wed Nov 20 18:11:59 2024 00:44:59.439 read: IOPS=373, BW=1491KiB/s (1527kB/s)(3772KiB/2530msec) 00:44:59.439 slat (nsec): min=2952, max=62490, avg=22350.90, stdev=9361.90 00:44:59.439 clat (usec): min=280, max=42080, avg=2632.02, stdev=8480.60 00:44:59.439 lat (usec): min=288, max=42107, avg=2654.36, stdev=8481.73 00:44:59.439 clat percentiles (usec): 00:44:59.439 | 1.00th=[ 383], 5.00th=[ 529], 10.00th=[ 578], 20.00th=[ 619], 00:44:59.439 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 807], 00:44:59.439 | 70.00th=[ 848], 80.00th=[ 938], 90.00th=[ 1172], 95.00th=[ 1647], 00:44:59.439 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:44:59.439 | 99.99th=[42206] 00:44:59.440 bw ( KiB/s): min= 312, max= 4288, per=14.16%, avg=1502.40, stdev=1653.83, samples=5 00:44:59.440 iops : min= 78, max= 1072, avg=375.60, stdev=413.46, samples=5 00:44:59.440 lat (usec) : 500=3.71%, 750=45.76%, 1000=32.42% 00:44:59.440 lat (msec) : 2=13.35%, 4=0.11%, 50=4.56% 00:44:59.440 cpu : usr=0.47%, sys=1.34%, ctx=944, majf=0, minf=2 00:44:59.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:59.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:59.440 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:59.440 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:59.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:59.440 00:44:59.440 Run status group 0 (all jobs): 00:44:59.440 READ: bw=10.4MiB/s (10.9MB/s), 1491KiB/s-3375KiB/s (1527kB/s-3456kB/s), io=31.8MiB (33.4MB), run=2530-3074msec 00:44:59.440 00:44:59.440 Disk stats (read/write): 00:44:59.440 nvme0n1: ios=2364/0, merge=0/0, ticks=2522/0, in_queue=2522, util=92.82% 00:44:59.440 nvme0n2: ios=2585/0, merge=0/0, ticks=2592/0, in_queue=2592, util=92.23% 00:44:59.440 nvme0n3: ios=2171/0, merge=0/0, ticks=2242/0, in_queue=2242, util=95.64% 00:44:59.440 nvme0n4: ios=893/0, merge=0/0, ticks=2206/0, in_queue=2206, util=96.02% 00:44:59.440 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:59.440 
18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:44:59.699 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:44:59.699 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:44:59.959 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:44:59.959 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:44:59.959 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:44:59.959 18:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:45:00.218 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:45:00.218 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3026342
00:45:00.218 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:45:00.218 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:45:00.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:45:00.478 nvmf hotplug test: fio failed as expected
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
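The fio.sh trace just above (lines @58 through @80) is the hotplug check itself: fio is started in the background against the four exported namespaces, the raid and malloc bdevs are then deleted underneath it over rpc.py, and the test only passes because fio fails. A rough shell sketch of that flow, reconstructed from the traced commands (not the verbatim script; paths abbreviated and error handling simplified):

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 seconds of reads, job file shown above
    fio_pid=$!                                                 # 3026342 in this run
    sleep 3                                                    # let the jobs get I/O in flight
    scripts/rpc.py bdev_raid_delete concat0                    # rip the bdevs out under live I/O
    scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"       # Malloc0 .. Malloc6
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?          # fio exits 4 after the 'Operation not supported' io_u errors
    if [ "$fio_status" -eq 0 ]; then
        :                                     # success path, not taken in this run
    else
        echo 'nvmf hotplug test: fio failed as expected'
    fi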
00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:00.478 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:00.478 rmmod nvme_tcp 00:45:00.738 rmmod nvme_fabrics 00:45:00.738 rmmod nvme_keyring 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3023042 ']' 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3023042 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3023042 ']' 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3023042 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3023042 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3023042' 00:45:00.738 killing process with pid 3023042 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3023042 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3023042 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:45:00.738 18:12:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:00.738 18:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:03.282 00:45:03.282 real 0m28.213s 00:45:03.282 user 2m23.384s 00:45:03.282 sys 0m12.323s 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:45:03.282 ************************************ 00:45:03.282 END TEST nvmf_fio_target 00:45:03.282 ************************************ 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:03.282 ************************************ 00:45:03.282 START TEST nvmf_bdevio 00:45:03.282 ************************************ 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:45:03.282 * Looking for test storage... 
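With nvmf_fio_target finished (the real/user/sys triple above is that suite's timing), the harness moves on to the bdevio suite. Each suite goes through the same run_test wrapper, which prints the START/END banners and times the script; the invocation recorded above is, verbatim from the trace:

    # Launch the next suite under the autotest harness wrapper.
    run_test nvmf_bdevio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
        --transport=tcp --interrupt-mode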
00:45:03.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:03.282 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:03.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:03.283 --rc genhtml_branch_coverage=1 00:45:03.283 --rc genhtml_function_coverage=1 00:45:03.283 --rc genhtml_legend=1 00:45:03.283 --rc geninfo_all_blocks=1 00:45:03.283 --rc geninfo_unexecuted_blocks=1 00:45:03.283 00:45:03.283 ' 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:03.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:03.283 --rc genhtml_branch_coverage=1 00:45:03.283 --rc genhtml_function_coverage=1 00:45:03.283 --rc genhtml_legend=1 00:45:03.283 --rc geninfo_all_blocks=1 00:45:03.283 --rc geninfo_unexecuted_blocks=1 00:45:03.283 00:45:03.283 ' 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:03.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:03.283 --rc genhtml_branch_coverage=1 00:45:03.283 --rc genhtml_function_coverage=1 00:45:03.283 --rc genhtml_legend=1 00:45:03.283 --rc geninfo_all_blocks=1 00:45:03.283 --rc geninfo_unexecuted_blocks=1 00:45:03.283 00:45:03.283 ' 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:03.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:03.283 --rc genhtml_branch_coverage=1 00:45:03.283 --rc genhtml_function_coverage=1 00:45:03.283 --rc genhtml_legend=1 00:45:03.283 --rc geninfo_all_blocks=1 00:45:03.283 --rc geninfo_unexecuted_blocks=1 00:45:03.283 00:45:03.283 ' 00:45:03.283 18:12:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:03.283 18:12:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:03.283 18:12:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:45:03.283 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:45:03.284 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:45:03.284 18:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:11.422 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:11.422 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:11.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:11.422 
18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:11.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:11.422 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:11.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:11.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:45:11.423 00:45:11.423 --- 10.0.0.2 ping statistics --- 00:45:11.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:11.423 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:11.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:11.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:45:11.423 00:45:11.423 --- 10.0.0.1 ping statistics --- 00:45:11.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:11.423 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # 
nvmfpid=3032065 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3032065 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3032065 ']' 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:11.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.423 [2024-11-20 18:12:10.588611] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:11.423 [2024-11-20 18:12:10.589766] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:45:11.423 [2024-11-20 18:12:10.589820] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:11.423 [2024-11-20 18:12:10.665041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:11.423 [2024-11-20 18:12:10.730037] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:11.423 [2024-11-20 18:12:10.730104] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:11.423 [2024-11-20 18:12:10.730116] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:11.423 [2024-11-20 18:12:10.730125] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:11.423 [2024-11-20 18:12:10.730133] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:11.423 [2024-11-20 18:12:10.730313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:45:11.423 [2024-11-20 18:12:10.730475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:45:11.423 [2024-11-20 18:12:10.730635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:45:11.423 [2024-11-20 18:12:10.730636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:45:11.423 [2024-11-20 18:12:10.810753] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:11.423 [2024-11-20 18:12:10.811748] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
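Two things worth noting in the startup trace above. First, nvmf_tcp_init moves one port of the e810 pair into a private network namespace and addresses both ends, so target and initiator talk over real hardware from separate namespaces; condensed from the ip/iptables commands in the trace:

    # Network plumbing sketch: cvl_0_0 becomes the target-side port inside
    # the namespace, cvl_0_1 stays in the root namespace for the initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # target reachable from the initiator side

Second, nvmfappstart launches the target inside that namespace with --interrupt-mode and core mask 0x78 (cores 3-6, matching the reactor notices above), then waitforlisten blocks on the RPC socket:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!    # waitforlisten then polls /var/tmp/spdk.sock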
00:45:11.423 [2024-11-20 18:12:10.811905] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:11.423 [2024-11-20 18:12:10.812576] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:45:11.423 [2024-11-20 18:12:10.812579] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.423 [2024-11-20 18:12:10.891684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.423 Malloc0 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:11.423 [2024-11-20 18:12:10.972042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:45:11.423 { 00:45:11.423 "params": { 00:45:11.423 "name": "Nvme$subsystem", 00:45:11.423 "trtype": "$TEST_TRANSPORT", 00:45:11.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:11.423 "adrfam": "ipv4", 00:45:11.423 "trsvcid": "$NVMF_PORT", 00:45:11.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:11.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:11.423 "hdgst": ${hdgst:-false}, 00:45:11.423 "ddgst": ${ddgst:-false} 00:45:11.423 }, 00:45:11.423 "method": "bdev_nvme_attach_controller" 00:45:11.423 } 00:45:11.423 EOF 00:45:11.423 )") 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:45:11.423 18:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:45:11.423 "params": { 00:45:11.423 "name": "Nvme1", 00:45:11.423 "trtype": "tcp", 00:45:11.423 "traddr": "10.0.0.2", 00:45:11.423 "adrfam": "ipv4", 00:45:11.423 "trsvcid": "4420", 00:45:11.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:11.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:11.424 "hdgst": false, 00:45:11.424 "ddgst": false 00:45:11.424 }, 00:45:11.424 "method": "bdev_nvme_attach_controller" 00:45:11.424 }' 00:45:11.424 [2024-11-20 18:12:11.028106] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
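The target side of the bdevio run is assembled entirely over RPC (rpc_cmd is the harness wrapper around scripts/rpc.py); the calls traced above amount to:

    # Target-side setup replayed from the rpc_cmd trace: TCP transport,
    # a 64 MiB / 512 B-block malloc bdev, one subsystem, one namespace,
    # one listener on the in-namespace address.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed just above is the initiator side: gen_nvmf_target_json expands its heredoc template into a bdev_nvme_attach_controller config that bdevio reads from /dev/fd/62, so the suite exercises an NVMe/TCP-backed bdev (Nvme1n1) rather than a local device.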
00:45:11.424 [2024-11-20 18:12:11.028178] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032244 ] 00:45:11.424 [2024-11-20 18:12:11.107818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:11.424 [2024-11-20 18:12:11.156053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:11.424 [2024-11-20 18:12:11.156213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:45:11.424 [2024-11-20 18:12:11.156236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:11.685 I/O targets: 00:45:11.685 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:45:11.685 00:45:11.685 00:45:11.685 CUnit - A unit testing framework for C - Version 2.1-3 00:45:11.685 http://cunit.sourceforge.net/ 00:45:11.685 00:45:11.685 00:45:11.685 Suite: bdevio tests on: Nvme1n1 00:45:11.685 Test: blockdev write read block ...passed 00:45:11.685 Test: blockdev write zeroes read block ...passed 00:45:11.685 Test: blockdev write zeroes read no split ...passed 00:45:11.946 Test: blockdev write zeroes read split ...passed 00:45:11.946 Test: blockdev write zeroes read split partial ...passed 00:45:11.946 Test: blockdev reset ...[2024-11-20 18:12:11.633905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:45:11.946 [2024-11-20 18:12:11.633995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147f340 (9): Bad file descriptor 00:45:11.946 [2024-11-20 18:12:11.729395] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:45:11.946 passed 00:45:11.946 Test: blockdev write read 8 blocks ...passed 00:45:11.946 Test: blockdev write read size > 128k ...passed 00:45:11.946 Test: blockdev write read invalid size ...passed 00:45:11.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:11.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:11.946 Test: blockdev write read max offset ...passed 00:45:12.207 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:12.207 Test: blockdev writev readv 8 blocks ...passed 00:45:12.207 Test: blockdev writev readv 30 x 1block ...passed 00:45:12.207 Test: blockdev writev readv block ...passed 00:45:12.207 Test: blockdev writev readv size > 128k ...passed 00:45:12.207 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:12.207 Test: blockdev comparev and writev ...[2024-11-20 18:12:11.990711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:12.207 [2024-11-20 18:12:11.990774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:45:12.207 [2024-11-20 18:12:11.990791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:12.207 [2024-11-20 18:12:11.990801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:45:12.207 [2024-11-20 18:12:11.991286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:12.207 [2024-11-20 18:12:11.991301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:45:12.207 [2024-11-20 18:12:11.991315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:12.207 [2024-11-20 18:12:11.991323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:45:12.207 [2024-11-20 18:12:11.991802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:12.207 [2024-11-20 18:12:11.991817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:45:12.208 [2024-11-20 18:12:11.991831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:12.208 [2024-11-20 18:12:11.991841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:45:12.208 [2024-11-20 18:12:11.992325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:12.208 [2024-11-20 18:12:11.992340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:45:12.208 [2024-11-20 18:12:11.992354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:12.208 [2024-11-20 18:12:11.992362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:45:12.208 passed 00:45:12.208 Test: blockdev nvme passthru rw ...passed 00:45:12.208 Test: blockdev nvme passthru vendor specific ...[2024-11-20 18:12:12.076496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:12.208 [2024-11-20 18:12:12.076522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:45:12.208 [2024-11-20 18:12:12.076750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:12.208 [2024-11-20 18:12:12.076764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:45:12.208 [2024-11-20 18:12:12.076996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:12.208 [2024-11-20 18:12:12.077007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:45:12.208 [2024-11-20 18:12:12.077237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:12.208 [2024-11-20 18:12:12.077250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:45:12.208 passed 00:45:12.208 Test: blockdev nvme admin passthru ...passed 00:45:12.468 Test: blockdev copy ...passed 00:45:12.468 00:45:12.468 Run Summary: Type Total Ran Passed Failed Inactive 00:45:12.468 suites 1 1 n/a 0 0 00:45:12.468 tests 23 23 23 0 0 00:45:12.468 asserts 152 152 152 0 n/a 00:45:12.468 00:45:12.468 Elapsed time = 1.346 seconds 00:45:12.468 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:12.468 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.468 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:12.469 rmmod nvme_tcp 00:45:12.469 rmmod nvme_fabrics 00:45:12.469 rmmod nvme_keyring 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 3032065 ']' 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3032065 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3032065 ']' 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3032065 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:12.469 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3032065 00:45:12.729 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:45:12.729 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:45:12.729 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3032065' 00:45:12.729 killing process with pid 3032065 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3032065 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3032065 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:12.730 18:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:15.277 18:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:15.277 00:45:15.277 real 0m11.941s 00:45:15.277 user 
0m10.655s 00:45:15.277 sys 0m6.639s 00:45:15.277 18:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:15.277 18:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:15.277 ************************************ 00:45:15.277 END TEST nvmf_bdevio 00:45:15.277 ************************************ 00:45:15.277 18:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:45:15.277 00:45:15.277 real 4m56.556s 00:45:15.277 user 10m20.873s 00:45:15.277 sys 2m2.490s 00:45:15.277 18:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:15.277 18:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:15.277 ************************************ 00:45:15.277 END TEST nvmf_target_core_interrupt_mode 00:45:15.277 ************************************ 00:45:15.277 18:12:14 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:45:15.277 18:12:14 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:15.277 18:12:14 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:15.277 18:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:15.277 ************************************ 00:45:15.277 START TEST nvmf_interrupt 00:45:15.277 ************************************ 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:45:15.277 * Looking for test storage... 
00:45:15.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:15.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:15.277 --rc genhtml_branch_coverage=1 00:45:15.277 --rc genhtml_function_coverage=1 00:45:15.277 --rc genhtml_legend=1 00:45:15.277 --rc geninfo_all_blocks=1 00:45:15.277 --rc geninfo_unexecuted_blocks=1 00:45:15.277 00:45:15.277 ' 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:15.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:15.277 --rc genhtml_branch_coverage=1 00:45:15.277 --rc genhtml_function_coverage=1 00:45:15.277 --rc genhtml_legend=1 00:45:15.277 --rc geninfo_all_blocks=1 00:45:15.277 --rc geninfo_unexecuted_blocks=1 00:45:15.277 00:45:15.277 ' 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:15.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:15.277 --rc genhtml_branch_coverage=1 00:45:15.277 --rc genhtml_function_coverage=1 00:45:15.277 --rc genhtml_legend=1 00:45:15.277 --rc geninfo_all_blocks=1 00:45:15.277 --rc geninfo_unexecuted_blocks=1 00:45:15.277 00:45:15.277 ' 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:15.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:15.277 --rc genhtml_branch_coverage=1 00:45:15.277 --rc genhtml_function_coverage=1 00:45:15.277 --rc genhtml_legend=1 00:45:15.277 --rc geninfo_all_blocks=1 00:45:15.277 --rc geninfo_unexecuted_blocks=1 00:45:15.277 00:45:15.277 ' 00:45:15.277 18:12:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:45:15.278 18:12:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.421 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:23.421 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:45:23.421 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:23.421 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:23.421 18:12:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:23.421 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:23.422 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:23.422 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:23.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:23.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:23.422 18:12:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:23.422 18:12:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:23.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:23.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:45:23.422 00:45:23.422 --- 10.0.0.2 ping statistics --- 00:45:23.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:23.422 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:23.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:23.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:45:23.422 00:45:23.422 --- 10.0.0.1 ping statistics --- 00:45:23.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:23.422 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=3036574 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 3036574 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3036574 ']' 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:23.422 18:12:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:23.423 18:12:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:23.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:23.423 18:12:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:23.423 18:12:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.423 [2024-11-20 18:12:22.364869] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:23.423 [2024-11-20 18:12:22.365858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:45:23.423 [2024-11-20 18:12:22.365895] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:23.423 [2024-11-20 18:12:22.447169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:23.423 [2024-11-20 18:12:22.478601] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
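The nvmfappstart traced here launches nvmf_tgt inside the target namespace with --interrupt-mode and then blocks until the app's RPC socket answers. A minimal sketch, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the readiness probe (waitforlisten's actual probe may differ):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # poll until the target answers RPCs before configuring it
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done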
00:45:23.423 [2024-11-20 18:12:22.478636] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:23.423 [2024-11-20 18:12:22.478644] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:23.423 [2024-11-20 18:12:22.478651] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:23.423 [2024-11-20 18:12:22.478656] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:23.423 [2024-11-20 18:12:22.478790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:23.423 [2024-11-20 18:12:22.478792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:23.423 [2024-11-20 18:12:22.527256] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:23.423 [2024-11-20 18:12:22.527755] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:23.423 [2024-11-20 18:12:22.528105] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:45:23.423 5000+0 records in 00:45:23.423 5000+0 records out 00:45:23.423 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179427 s, 571 MB/s 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.423 AIO0 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.423 [2024-11-20 18:12:23.295745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:23.423 18:12:23 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:23.423 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:23.684 [2024-11-20 18:12:23.348211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3036574 0 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3036574 0 idle 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3036574 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3036574 -w 256 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3036574 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0' 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3036574 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3036574 1 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3036574 1 idle 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3036574 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:23.684 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:23.685 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:23.685 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:23.685 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:23.685 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:23.685 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:23.685 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3036574 -w 256 00:45:23.685 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3036578 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3036578 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3036927 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3036574 0 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3036574 0 busy 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3036574 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3036574 -w 256 00:45:23.946 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3036574 root 20 0 128.2g 44928 32256 R 80.0 0.0 0:00.36 reactor_0' 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3036574 root 20 0 128.2g 44928 32256 R 80.0 0.0 0:00.36 reactor_0 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=80.0 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=80 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3036574 1 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3036574 1 busy 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3036574 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3036574 -w 256 00:45:24.206 18:12:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:24.206 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3036578 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.25 reactor_1' 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3036578 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.25 reactor_1 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:24.207 18:12:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3036927 00:45:34.210 Initializing NVMe Controllers 00:45:34.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:34.210 Controller IO queue size 256, less than required. 00:45:34.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:34.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:45:34.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:45:34.210 Initialization complete. Launching workers. 
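The run summary that follows was produced by the spdk_nvme_perf invocation traced above; written out on one line, the command from this run is:

  ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

i.e. queue depth 256, 4 KiB I/Os, 30% reads / 70% writes, a 10 second run, pinned to cores 2 and 3 (mask 0xC), which matches the two lcore associations printed below.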
00:45:34.210 ======================================================== 00:45:34.210 Latency(us) 00:45:34.210 Device Information : IOPS MiB/s Average min max 00:45:34.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18729.29 73.16 13673.69 4292.80 30499.83 00:45:34.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19934.09 77.87 12844.08 7867.40 28229.45 00:45:34.210 ======================================================== 00:45:34.210 Total : 38663.39 151.03 13245.96 4292.80 30499.83 00:45:34.210 00:45:34.210 [2024-11-20 18:12:33.894913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22762a0 is same with the state(6) to be set 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3036574 0 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3036574 0 idle 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3036574 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3036574 -w 256 00:45:34.210 18:12:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3036574 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0' 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3036574 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3036574 1 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3036574 1 idle 
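The reactor_is_idle checks being entered here sample one batch iteration of top and compare the reactor thread's %CPU column against a threshold. In essence (a sketch of the logic visible in the trace; the integer truncation step is an assumption):

  # is reactor_1 of $pid idle? take one top sample; field 9 is %CPU
  line=$(top -bHn 1 -p "$pid" -w 256 | grep reactor_1)
  cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
  cpu_rate=${cpu_rate%%.*}            # 0.0 -> 0, as in the trace
  (( cpu_rate <= 30 )) && echo idle   # idle_threshold=30 above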
00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3036574 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3036574 -w 256 00:45:34.210 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3036578 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3036578 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:34.471 18:12:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:45:35.041 18:12:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:45:35.041 18:12:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:45:35.041 18:12:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:45:35.041 18:12:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:45:35.041 18:12:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:45:37.586 18:12:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:45:37.586 18:12:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:45:37.586 18:12:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:45:37.586 18:12:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:45:37.586 18:12:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter 
)) 00:45:37.586 18:12:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3036574 0 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3036574 0 idle 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3036574 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3036574 -w 256 00:45:37.587 18:12:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3036574 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.59 reactor_0' 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3036574 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.59 reactor_0 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3036574 1 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3036574 1 idle 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3036574 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3036574 -w 256 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3036578 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1' 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3036578 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:37.587 18:12:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:37.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:37.847 rmmod nvme_tcp 00:45:37.847 rmmod nvme_fabrics 00:45:37.847 rmmod nvme_keyring 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:45:37.847 18:12:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 3036574 ']' 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 3036574 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3036574 ']' 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3036574 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3036574 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:37.847 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3036574' 00:45:37.848 killing process with pid 3036574 00:45:37.848 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3036574 00:45:37.848 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3036574 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:38.108 18:12:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:40.649 18:12:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:40.649 00:45:40.649 real 0m25.136s 00:45:40.649 user 0m40.100s 00:45:40.649 sys 0m9.587s 00:45:40.649 18:12:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:40.649 18:12:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:40.649 ************************************ 00:45:40.649 END TEST nvmf_interrupt 00:45:40.649 ************************************ 00:45:40.649 00:45:40.649 real 37m52.531s 00:45:40.649 user 92m18.871s 00:45:40.649 sys 11m27.115s 00:45:40.649 18:12:39 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:40.649 18:12:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.649 ************************************ 00:45:40.649 END TEST nvmf_tcp 00:45:40.649 ************************************ 00:45:40.649 18:12:40 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:45:40.649 18:12:40 -- spdk/autotest.sh@282 -- # run_test 
spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:40.649 18:12:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:40.649 18:12:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:40.649 18:12:40 -- common/autotest_common.sh@10 -- # set +x 00:45:40.649 ************************************ 00:45:40.649 START TEST spdkcli_nvmf_tcp 00:45:40.649 ************************************ 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:40.649 * Looking for test storage... 00:45:40.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:45:40.649 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:40.650 --rc genhtml_branch_coverage=1 00:45:40.650 --rc genhtml_function_coverage=1 00:45:40.650 --rc genhtml_legend=1 00:45:40.650 --rc geninfo_all_blocks=1 00:45:40.650 --rc geninfo_unexecuted_blocks=1 00:45:40.650 00:45:40.650 ' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:40.650 --rc genhtml_branch_coverage=1 00:45:40.650 --rc genhtml_function_coverage=1 00:45:40.650 --rc genhtml_legend=1 00:45:40.650 --rc geninfo_all_blocks=1 00:45:40.650 --rc geninfo_unexecuted_blocks=1 00:45:40.650 00:45:40.650 ' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:40.650 --rc genhtml_branch_coverage=1 00:45:40.650 --rc genhtml_function_coverage=1 00:45:40.650 --rc genhtml_legend=1 00:45:40.650 --rc geninfo_all_blocks=1 00:45:40.650 --rc geninfo_unexecuted_blocks=1 00:45:40.650 00:45:40.650 ' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:40.650 --rc genhtml_branch_coverage=1 00:45:40.650 --rc genhtml_function_coverage=1 00:45:40.650 --rc genhtml_legend=1 00:45:40.650 --rc geninfo_all_blocks=1 00:45:40.650 --rc geninfo_unexecuted_blocks=1 00:45:40.650 00:45:40.650 ' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:45:40.650 
18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:45:40.650 18:12:40 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:40.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3040064 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3040064 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3040064 ']' 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:40.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:40.650 18:12:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:40.650 [2024-11-20 18:12:40.311784] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:45:40.650 [2024-11-20 18:12:40.311841] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040064 ] 00:45:40.650 [2024-11-20 18:12:40.387460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:40.650 [2024-11-20 18:12:40.419962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:40.650 [2024-11-20 18:12:40.419972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:41.221 18:12:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:41.482 18:12:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:45:41.482 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:45:41.482 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:45:41.482 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:45:41.482 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:45:41.482 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:45:41.482 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:45:41.482 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:41.482 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:41.482 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:45:41.482 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:45:41.482 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:45:41.482 ' 00:45:44.081 [2024-11-20 18:12:43.846185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:45.464 [2024-11-20 18:12:45.206314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:45:48.007 [2024-11-20 18:12:47.733433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:45:50.550 [2024-11-20 18:12:49.951794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:45:51.933 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:45:51.933 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:45:51.933 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:45:51.933 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:45:51.933 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:45:51.933 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:45:51.933 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:45:51.933 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:51.933 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:45:51.933 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:51.933 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:45:51.933 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:45:51.933 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:45:51.933 18:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:45:51.933 18:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:51.933 18:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:51.933 18:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:45:51.933 18:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:51.933 18:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:51.933 18:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:45:51.933 18:12:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:45:52.505 18:12:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:45:52.505 18:12:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:45:52.505 18:12:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:45:52.505 18:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:52.505 18:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:52.505 
18:12:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:45:52.505 18:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:52.505 18:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:52.505 18:12:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:45:52.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:45:52.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:52.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:45:52.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:45:52.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:45:52.505 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:45:52.505 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:52.505 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:45:52.505 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:45:52.505 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:45:52.505 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:45:52.505 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:45:52.505 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:45:52.505 ' 00:45:59.084 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:45:59.084 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:45:59.084 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:59.084 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:45:59.084 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:45:59.084 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:45:59.084 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:45:59.084 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:59.084 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:45:59.084 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:45:59.084 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:45:59.084 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:45:59.084 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:45:59.084 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:45:59.084 18:12:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:45:59.084 18:12:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:59.084 18:12:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:59.084 
18:12:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3040064 00:45:59.084 18:12:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3040064 ']' 00:45:59.084 18:12:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3040064 00:45:59.084 18:12:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:45:59.084 18:12:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:59.084 18:12:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3040064 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3040064' 00:45:59.084 killing process with pid 3040064 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3040064 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3040064 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3040064 ']' 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3040064 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3040064 ']' 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3040064 00:45:59.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3040064) - No such process 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3040064 is not found' 00:45:59.084 Process with pid 3040064 is not found 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:45:59.084 00:45:59.084 real 0m18.121s 00:45:59.084 user 0m40.292s 00:45:59.084 sys 0m0.849s 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:59.084 18:12:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:59.084 ************************************ 00:45:59.084 END TEST spdkcli_nvmf_tcp 00:45:59.084 ************************************ 00:45:59.084 18:12:58 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:59.084 18:12:58 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:59.084 18:12:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:59.084 18:12:58 -- common/autotest_common.sh@10 -- # set +x 00:45:59.084 ************************************ 00:45:59.084 START TEST nvmf_identify_passthru 00:45:59.084 ************************************ 00:45:59.084 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:59.084 * Looking for test 
storage... 00:45:59.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:59.084 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:59.084 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:45:59.084 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:59.084 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:45:59.084 18:12:58 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:45:59.085 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:59.085 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:59.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:59.085 --rc genhtml_branch_coverage=1 00:45:59.085 --rc genhtml_function_coverage=1 00:45:59.085 --rc genhtml_legend=1 00:45:59.085 --rc geninfo_all_blocks=1 00:45:59.085 --rc geninfo_unexecuted_blocks=1 00:45:59.085 00:45:59.085 ' 00:45:59.085 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:59.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:59.085 --rc genhtml_branch_coverage=1 00:45:59.085 --rc genhtml_function_coverage=1 00:45:59.085 --rc genhtml_legend=1 00:45:59.085 --rc geninfo_all_blocks=1 00:45:59.085 --rc geninfo_unexecuted_blocks=1 00:45:59.085 00:45:59.085 ' 00:45:59.085 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:59.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:59.085 --rc genhtml_branch_coverage=1 00:45:59.085 --rc genhtml_function_coverage=1 00:45:59.085 --rc genhtml_legend=1 00:45:59.085 --rc geninfo_all_blocks=1 00:45:59.085 --rc geninfo_unexecuted_blocks=1 00:45:59.085 00:45:59.085 ' 00:45:59.085 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:59.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:59.085 --rc genhtml_branch_coverage=1 00:45:59.085 --rc genhtml_function_coverage=1 00:45:59.085 --rc genhtml_legend=1 00:45:59.085 --rc geninfo_all_blocks=1 00:45:59.085 --rc geninfo_unexecuted_blocks=1 00:45:59.085 00:45:59.085 ' 00:45:59.085 18:12:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:59.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:59.085 18:12:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:59.085 18:12:58 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:59.085 18:12:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:59.085 18:12:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:59.085 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:59.085 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:45:59.085 18:12:58 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:45:59.085 18:12:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:46:05.665 18:13:05 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:05.665 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:05.665 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:46:05.665 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:46:05.666 
18:13:05 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:05.666 Found net devices under 0000:4b:00.0: cvl_0_0 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:05.666 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:05.666 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:05.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:05.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:46:05.927 00:46:05.927 --- 10.0.0.2 ping statistics --- 00:46:05.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:05.927 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:05.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:05.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:46:05.927 00:46:05.927 --- 10.0.0.1 ping statistics --- 00:46:05.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:05.927 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:46:05.927 18:13:05 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:46:05.927 18:13:05 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:05.927 18:13:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:46:05.927 18:13:05 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:46:05.927 18:13:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:46:05.927 18:13:05 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:46:05.927 18:13:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:46:05.927 18:13:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:46:05.927 18:13:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:46:06.497 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:46:06.497 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:46:06.497 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:46:06.497 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:46:07.067 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:46:07.067 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.067 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.067 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3047115 00:46:07.067 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:07.067 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:46:07.067 18:13:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3047115 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3047115 ']' 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:07.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:07.067 18:13:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:07.067 [2024-11-20 18:13:06.848409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:46:07.067 [2024-11-20 18:13:06.848459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:07.067 [2024-11-20 18:13:06.928198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:07.067 [2024-11-20 18:13:06.961527] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:07.067 [2024-11-20 18:13:06.961565] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
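The nvmf_tcp_init sequence traced above builds a back-to-back loopback topology out of the two E810 ports: the target port (cvl_0_0) is moved into a private network namespace so initiator and target get independent IP stacks on the same host, addresses 10.0.0.1/10.0.0.2 are assigned on either side, port 4420 is opened for the NVMe/TCP listener, and reachability is verified with ping in both directions. A minimal standalone sketch of the same setup, run as root and assuming the interface and namespace names from this run:

  #!/usr/bin/env bash
  set -e
  tgt_if=cvl_0_0          # port that will host the NVMe/TCP target
  ini_if=cvl_0_1          # port that stays in the root namespace (initiator)
  ns=cvl_0_0_ns_spdk

  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"               # isolate the target port
  ip addr add 10.0.0.1/24 dev "$ini_if"           # initiator address
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"   # target address
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  # Let the NVMe/TCP port through the host firewall, tagged for later cleanup.
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  # Verify reachability in both directions, as the log does.
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1

The SPDK_NVMF comment tag matters later: the teardown path removes only the test's rules by filtering them out of iptables-save output before restoring (visible further down as iptables-save | grep -v SPDK_NVMF | iptables-restore).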
00:46:07.067 [2024-11-20 18:13:06.961574] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:07.067 [2024-11-20 18:13:06.961580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:07.067 [2024-11-20 18:13:06.961586] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:07.067 [2024-11-20 18:13:06.961725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:07.067 [2024-11-20 18:13:06.961877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:46:07.067 [2024-11-20 18:13:06.962031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:07.067 [2024-11-20 18:13:06.962033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:46:08.007 18:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.007 INFO: Log level set to 20 00:46:08.007 INFO: Requests: 00:46:08.007 { 00:46:08.007 "jsonrpc": "2.0", 00:46:08.007 "method": "nvmf_set_config", 00:46:08.007 "id": 1, 00:46:08.007 "params": { 00:46:08.007 "admin_cmd_passthru": { 00:46:08.007 "identify_ctrlr": true 00:46:08.007 } 00:46:08.007 } 00:46:08.007 } 00:46:08.007 00:46:08.007 INFO: response: 00:46:08.007 { 00:46:08.007 "jsonrpc": "2.0", 00:46:08.007 "id": 1, 00:46:08.007 "result": true 00:46:08.007 } 00:46:08.007 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.007 18:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.007 INFO: Setting log level to 20 00:46:08.007 INFO: Setting log level to 20 00:46:08.007 INFO: Log level set to 20 00:46:08.007 INFO: Log level set to 20 00:46:08.007 INFO: Requests: 00:46:08.007 { 00:46:08.007 "jsonrpc": "2.0", 00:46:08.007 "method": "framework_start_init", 00:46:08.007 "id": 1 00:46:08.007 } 00:46:08.007 00:46:08.007 INFO: Requests: 00:46:08.007 { 00:46:08.007 "jsonrpc": "2.0", 00:46:08.007 "method": "framework_start_init", 00:46:08.007 "id": 1 00:46:08.007 } 00:46:08.007 00:46:08.007 [2024-11-20 18:13:07.722307] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:46:08.007 INFO: response: 00:46:08.007 { 00:46:08.007 "jsonrpc": "2.0", 00:46:08.007 "id": 1, 00:46:08.007 "result": true 00:46:08.007 } 00:46:08.007 00:46:08.007 INFO: response: 00:46:08.007 { 00:46:08.007 "jsonrpc": "2.0", 00:46:08.007 "id": 1, 00:46:08.007 "result": true 00:46:08.007 } 00:46:08.007 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.007 18:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.007 18:13:07 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:46:08.007 INFO: Setting log level to 40 00:46:08.007 INFO: Setting log level to 40 00:46:08.007 INFO: Setting log level to 40 00:46:08.007 [2024-11-20 18:13:07.735628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.007 18:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.007 18:13:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.007 18:13:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.267 Nvme0n1 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.267 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.267 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.267 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.267 [2024-11-20 18:13:08.120556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.267 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:46:08.267 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.268 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.268 [ 00:46:08.268 { 00:46:08.268 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:46:08.268 "subtype": "Discovery", 00:46:08.268 "listen_addresses": [], 00:46:08.268 "allow_any_host": true, 00:46:08.268 "hosts": [] 00:46:08.268 }, 00:46:08.268 { 00:46:08.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:08.268 "subtype": "NVMe", 00:46:08.268 "listen_addresses": [ 00:46:08.268 { 00:46:08.268 "trtype": "TCP", 00:46:08.268 "adrfam": "IPv4", 00:46:08.268 "traddr": "10.0.0.2", 00:46:08.268 "trsvcid": "4420" 00:46:08.268 } 00:46:08.268 ], 00:46:08.268 "allow_any_host": true, 00:46:08.268 "hosts": [], 00:46:08.268 "serial_number": 
"SPDK00000000000001", 00:46:08.268 "model_number": "SPDK bdev Controller", 00:46:08.268 "max_namespaces": 1, 00:46:08.268 "min_cntlid": 1, 00:46:08.268 "max_cntlid": 65519, 00:46:08.268 "namespaces": [ 00:46:08.268 { 00:46:08.268 "nsid": 1, 00:46:08.268 "bdev_name": "Nvme0n1", 00:46:08.268 "name": "Nvme0n1", 00:46:08.268 "nguid": "36344730526054870025384500000044", 00:46:08.268 "uuid": "36344730-5260-5487-0025-384500000044" 00:46:08.268 } 00:46:08.268 ] 00:46:08.268 } 00:46:08.268 ] 00:46:08.268 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.268 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:46:08.268 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:46:08.268 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:46:08.528 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:46:08.528 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:46:08.528 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:46:08.528 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:46:08.788 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:46:08.788 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:46:08.788 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:46:08.788 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:08.788 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:08.788 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:08.788 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:08.788 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:46:08.788 18:13:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:46:08.788 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:46:08.788 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:46:08.788 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:08.789 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:46:08.789 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:08.789 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:08.789 rmmod nvme_tcp 00:46:08.789 rmmod nvme_fabrics 00:46:08.789 rmmod nvme_keyring 00:46:08.789 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:08.789 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:46:08.789 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:46:08.789 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 
3047115 ']' 00:46:08.789 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 3047115 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3047115 ']' 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3047115 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3047115 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3047115' 00:46:08.789 killing process with pid 3047115 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3047115 00:46:08.789 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3047115 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:09.049 18:13:08 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:09.049 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:09.049 18:13:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:11.110 18:13:10 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:11.110 00:46:11.110 real 0m12.783s 00:46:11.110 user 0m10.064s 00:46:11.110 sys 0m6.277s 00:46:11.110 18:13:10 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:11.110 18:13:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:11.110 ************************************ 00:46:11.110 END TEST nvmf_identify_passthru 00:46:11.110 ************************************ 00:46:11.372 18:13:11 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:46:11.372 18:13:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:11.372 18:13:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:11.372 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:46:11.372 ************************************ 00:46:11.372 START TEST nvmf_dif 00:46:11.372 ************************************ 00:46:11.372 18:13:11 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:46:11.372 * Looking for test storage... 
00:46:11.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:11.372 18:13:11 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:11.372 18:13:11 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:46:11.372 18:13:11 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:11.372 18:13:11 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:11.372 18:13:11 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:11.373 18:13:11 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:46:11.373 18:13:11 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:11.373 18:13:11 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:11.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.373 --rc genhtml_branch_coverage=1 00:46:11.373 --rc genhtml_function_coverage=1 00:46:11.373 --rc genhtml_legend=1 00:46:11.373 --rc geninfo_all_blocks=1 00:46:11.373 --rc geninfo_unexecuted_blocks=1 00:46:11.373 00:46:11.373 ' 00:46:11.373 18:13:11 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:11.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.373 --rc genhtml_branch_coverage=1 00:46:11.373 --rc genhtml_function_coverage=1 00:46:11.373 --rc genhtml_legend=1 00:46:11.373 --rc geninfo_all_blocks=1 00:46:11.373 --rc geninfo_unexecuted_blocks=1 00:46:11.373 00:46:11.373 ' 00:46:11.373 18:13:11 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:46:11.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.373 --rc genhtml_branch_coverage=1 00:46:11.373 --rc genhtml_function_coverage=1 00:46:11.373 --rc genhtml_legend=1 00:46:11.373 --rc geninfo_all_blocks=1 00:46:11.373 --rc geninfo_unexecuted_blocks=1 00:46:11.373 00:46:11.373 ' 00:46:11.373 18:13:11 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:11.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.373 --rc genhtml_branch_coverage=1 00:46:11.373 --rc genhtml_function_coverage=1 00:46:11.373 --rc genhtml_legend=1 00:46:11.373 --rc geninfo_all_blocks=1 00:46:11.373 --rc geninfo_unexecuted_blocks=1 00:46:11.373 00:46:11.373 ' 00:46:11.373 18:13:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:11.373 18:13:11 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:46:11.373 18:13:11 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:11.373 18:13:11 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:11.373 18:13:11 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:11.373 18:13:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:11.373 18:13:11 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:11.373 18:13:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:11.373 18:13:11 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:46:11.373 18:13:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:11.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:11.373 18:13:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:46:11.373 18:13:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:46:11.373 18:13:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:46:11.373 18:13:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:46:11.373 18:13:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:11.373 18:13:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:11.373 18:13:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:46:11.373 18:13:11 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:46:11.373 18:13:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:19.509 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:46:19.509 18:13:18 nvmf_dif 
-- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:19.509 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:19.509 Found net devices under 0000:4b:00.0: cvl_0_0 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:19.509 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:19.509 
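Both test suites resolve their interfaces the same way: gather_supported_nvmf_pci_devs matches the NIC's PCI IDs (0x8086:0x159b here, i.e. E810), then each PCI function is mapped to its kernel net device through sysfs. A rough sketch of that mapping step, assuming the two-port layout of this machine; the real common.sh also filters on link state and handles single-port and virtual setups, which is elided here:

  #!/usr/bin/env bash
  pci_devs=(0000:4b:00.0 0000:4b:00.1)   # E810 functions found by ID match
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # Each PCI function exposes its net device(s) under sysfs.
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
  # With more than one port available, the first becomes the target side
  # and the second the initiator side, matching the trace above.
  NVMF_TARGET_INTERFACE=${net_devs[0]}
  NVMF_INITIATOR_INTERFACE=${net_devs[1]}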
18:13:18 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:19.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:19.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:46:19.509 00:46:19.509 --- 10.0.0.2 ping statistics --- 00:46:19.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:19.509 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:19.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:19.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:46:19.509 00:46:19.509 --- 10.0.0.1 ping statistics --- 00:46:19.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:19.509 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:46:19.509 18:13:18 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:22.055 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:46:22.055 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:46:22.055 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:46:22.316 18:13:22 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:22.316 18:13:22 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:46:22.316 18:13:22 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:46:22.316 18:13:22 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:22.316 18:13:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:46:22.316 18:13:22 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:46:22.577 18:13:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:46:22.577 18:13:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:46:22.577 18:13:22 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:46:22.577 18:13:22 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:22.577 18:13:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:22.577 18:13:22 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=3053223 00:46:22.577 18:13:22 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 3053223 00:46:22.577 18:13:22 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:22.577 18:13:22 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3053223 ']' 00:46:22.577 18:13:22 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:22.577 18:13:22 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:22.577 18:13:22 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:46:22.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:22.577 18:13:22 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:22.577 18:13:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:22.577 [2024-11-20 18:13:22.324354] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:46:22.577 [2024-11-20 18:13:22.324419] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:22.577 [2024-11-20 18:13:22.412683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:22.577 [2024-11-20 18:13:22.458910] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:22.577 [2024-11-20 18:13:22.458951] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:22.577 [2024-11-20 18:13:22.458959] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:22.577 [2024-11-20 18:13:22.458966] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:22.577 [2024-11-20 18:13:22.458973] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:22.577 [2024-11-20 18:13:22.458993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:23.519 18:13:23 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:23.519 18:13:23 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:46:23.519 18:13:23 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:46:23.519 18:13:23 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:23.520 18:13:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:23.520 18:13:23 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:23.520 18:13:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:46:23.520 18:13:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:46:23.520 18:13:23 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:23.520 18:13:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:23.520 [2024-11-20 18:13:23.167213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:23.520 18:13:23 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:23.520 18:13:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:46:23.520 18:13:23 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:23.520 18:13:23 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:23.520 18:13:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:23.520 ************************************ 00:46:23.520 START TEST fio_dif_1_default 00:46:23.520 ************************************ 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:23.520 bdev_null0 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:23.520 [2024-11-20 18:13:23.223516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:23.520 { 00:46:23.520 "params": { 00:46:23.520 "name": "Nvme$subsystem", 00:46:23.520 "trtype": "$TEST_TRANSPORT", 00:46:23.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:23.520 "adrfam": "ipv4", 00:46:23.520 "trsvcid": "$NVMF_PORT", 00:46:23.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:23.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:23.520 "hdgst": ${hdgst:-false}, 00:46:23.520 
"ddgst": ${ddgst:-false} 00:46:23.520 }, 00:46:23.520 "method": "bdev_nvme_attach_controller" 00:46:23.520 } 00:46:23.520 EOF 00:46:23.520 )") 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:46:23.520 "params": { 00:46:23.520 "name": "Nvme0", 00:46:23.520 "trtype": "tcp", 00:46:23.520 "traddr": "10.0.0.2", 00:46:23.520 "adrfam": "ipv4", 00:46:23.520 "trsvcid": "4420", 00:46:23.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:23.520 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:23.520 "hdgst": false, 00:46:23.520 "ddgst": false 00:46:23.520 }, 00:46:23.520 "method": "bdev_nvme_attach_controller" 00:46:23.520 }' 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:23.520 18:13:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:23.780 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:23.780 fio-3.35 00:46:23.780 Starting 1 thread 00:46:36.004 00:46:36.004 filename0: (groupid=0, jobs=1): err= 0: pid=3053749: Wed Nov 20 18:13:34 2024 00:46:36.004 read: IOPS=246, BW=986KiB/s (1010kB/s)(9904KiB/10040msec) 00:46:36.004 slat (nsec): min=5405, max=55189, avg=6622.28, stdev=1723.70 00:46:36.004 clat (usec): min=506, max=43229, avg=16200.51, stdev=19570.80 00:46:36.004 lat (usec): min=512, max=43235, avg=16207.13, stdev=19570.44 00:46:36.004 clat percentiles (usec): 00:46:36.004 | 1.00th=[ 603], 5.00th=[ 766], 10.00th=[ 807], 20.00th=[ 832], 00:46:36.004 | 30.00th=[ 873], 40.00th=[ 963], 50.00th=[ 1004], 60.00th=[ 1037], 00:46:36.004 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:46:36.004 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:46:36.004 | 99.99th=[43254] 00:46:36.004 bw ( KiB/s): min= 704, max= 3968, per=100.00%, avg=988.80, stdev=752.25, samples=20 00:46:36.004 iops : min= 176, max= 992, avg=247.20, stdev=188.06, samples=20 00:46:36.004 lat (usec) : 750=4.40%, 1000=44.75% 00:46:36.004 lat (msec) : 2=12.88%, 50=37.96% 00:46:36.004 cpu : usr=93.78%, sys=5.97%, ctx=12, majf=0, minf=192 00:46:36.004 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:36.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:36.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:36.004 issued rwts: total=2476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:36.004 latency : target=0, window=0, percentile=100.00%, depth=4 
00:46:36.004 00:46:36.004 Run status group 0 (all jobs): 00:46:36.004 READ: bw=986KiB/s (1010kB/s), 986KiB/s-986KiB/s (1010kB/s-1010kB/s), io=9904KiB (10.1MB), run=10040-10040msec 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 00:46:36.004 real 0m11.227s 00:46:36.004 user 0m25.444s 00:46:36.004 sys 0m0.939s 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 ************************************ 00:46:36.004 END TEST fio_dif_1_default 00:46:36.004 ************************************ 00:46:36.004 18:13:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:46:36.004 18:13:34 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:36.004 18:13:34 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 ************************************ 00:46:36.004 START TEST fio_dif_1_multi_subsystems 00:46:36.004 ************************************ 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 bdev_null0 00:46:36.004 18:13:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 [2024-11-20 18:13:34.498817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 bdev_null1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:36.004 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:36.004 { 00:46:36.005 "params": { 00:46:36.005 "name": "Nvme$subsystem", 00:46:36.005 "trtype": "$TEST_TRANSPORT", 00:46:36.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:36.005 "adrfam": "ipv4", 00:46:36.005 "trsvcid": "$NVMF_PORT", 00:46:36.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:36.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:36.005 "hdgst": ${hdgst:-false}, 00:46:36.005 "ddgst": ${ddgst:-false} 00:46:36.005 }, 00:46:36.005 "method": "bdev_nvme_attach_controller" 00:46:36.005 } 00:46:36.005 EOF 00:46:36.005 )") 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:36.005 
18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:36.005 { 00:46:36.005 "params": { 00:46:36.005 "name": "Nvme$subsystem", 00:46:36.005 "trtype": "$TEST_TRANSPORT", 00:46:36.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:36.005 "adrfam": "ipv4", 00:46:36.005 "trsvcid": "$NVMF_PORT", 00:46:36.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:36.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:36.005 "hdgst": ${hdgst:-false}, 00:46:36.005 "ddgst": ${ddgst:-false} 00:46:36.005 }, 00:46:36.005 "method": "bdev_nvme_attach_controller" 00:46:36.005 } 00:46:36.005 EOF 00:46:36.005 )") 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:46:36.005 "params": { 00:46:36.005 "name": "Nvme0", 00:46:36.005 "trtype": "tcp", 00:46:36.005 "traddr": "10.0.0.2", 00:46:36.005 "adrfam": "ipv4", 00:46:36.005 "trsvcid": "4420", 00:46:36.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:36.005 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:36.005 "hdgst": false, 00:46:36.005 "ddgst": false 00:46:36.005 }, 00:46:36.005 "method": "bdev_nvme_attach_controller" 00:46:36.005 },{ 00:46:36.005 "params": { 00:46:36.005 "name": "Nvme1", 00:46:36.005 "trtype": "tcp", 00:46:36.005 "traddr": "10.0.0.2", 00:46:36.005 "adrfam": "ipv4", 00:46:36.005 "trsvcid": "4420", 00:46:36.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:36.005 "hdgst": false, 00:46:36.005 "ddgst": false 00:46:36.005 }, 00:46:36.005 "method": "bdev_nvme_attach_controller" 00:46:36.005 }' 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:36.005 18:13:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:36.005 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:36.005 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:36.005 fio-3.35 00:46:36.005 Starting 2 threads 00:46:45.998 00:46:45.998 filename0: (groupid=0, jobs=1): err= 0: pid=3055926: Wed Nov 20 18:13:45 2024 00:46:45.998 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:46:45.998 slat (nsec): min=5438, max=32454, avg=6408.45, stdev=1952.17 00:46:45.998 clat (usec): min=659, max=42494, avg=21084.83, stdev=20152.92 00:46:45.998 lat (usec): min=665, max=42520, avg=21091.23, stdev=20152.86 00:46:45.998 clat percentiles (usec): 00:46:45.998 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 840], 00:46:45.998 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:46:45.998 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:46:45.998 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:46:45.998 | 99.99th=[42730] 00:46:45.998 bw ( KiB/s): min= 672, max= 768, per=66.17%, avg=759.58, stdev=25.78, samples=19 00:46:45.998 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:46:45.998 lat (usec) : 750=1.37%, 1000=46.31% 00:46:45.998 lat (msec) : 2=2.11%, 50=50.21% 00:46:45.998 cpu : usr=95.26%, sys=4.54%, ctx=14, majf=0, minf=207 00:46:45.998 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:45.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.998 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:45.998 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:45.998 filename1: (groupid=0, jobs=1): err= 0: pid=3055927: Wed Nov 20 18:13:45 2024 00:46:45.998 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10030msec) 00:46:45.998 slat (nsec): min=5390, max=41267, avg=6505.72, stdev=2586.54 00:46:45.998 clat (usec): min=852, max=42728, avg=40918.31, stdev=2583.57 00:46:45.998 lat (usec): min=858, max=42734, avg=40924.81, stdev=2583.72 00:46:45.998 clat percentiles (usec): 00:46:45.998 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:46:45.998 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:46:45.998 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:46:45.998 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:46:45.998 | 99.99th=[42730] 00:46:45.998 bw ( KiB/s): min= 384, max= 416, per=34.00%, avg=390.40, stdev=13.13, samples=20 00:46:45.998 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:46:45.998 lat (usec) : 1000=0.41% 00:46:45.998 lat (msec) : 50=99.59% 00:46:45.998 cpu : usr=95.96%, sys=3.84%, ctx=11, majf=0, minf=84 00:46:45.998 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:45.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:46:45.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.998 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:45.998 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:45.998 00:46:45.998 Run status group 0 (all jobs): 00:46:45.998 READ: bw=1147KiB/s (1174kB/s), 391KiB/s-758KiB/s (400kB/s-776kB/s), io=11.2MiB (11.8MB), run=10003-10030msec 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.260 00:46:46.260 real 0m11.597s 00:46:46.260 user 0m38.258s 00:46:46.260 sys 0m1.216s 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 ************************************ 00:46:46.260 END TEST fio_dif_1_multi_subsystems 00:46:46.260 ************************************ 00:46:46.260 18:13:46 nvmf_dif -- target/dif.sh@143 -- # 
run_test fio_dif_rand_params fio_dif_rand_params 00:46:46.260 18:13:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:46.260 18:13:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 ************************************ 00:46:46.260 START TEST fio_dif_rand_params 00:46:46.260 ************************************ 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 bdev_null0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:46.260 [2024-11-20 18:13:46.149131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
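The sequence above is the entire target-side setup for this pass: NULL_DIF=3 selects T10 DIF type 3 protection, so the backing null bdev is created with 512-byte blocks plus 16 bytes of metadata per block, then exposed through a fresh NVMe-oF subsystem on the TCP listener. Outside the harness, the same steps map directly onto SPDK's rpc.py (rpc_cmd is a thin wrapper around it; the default RPC socket /var/tmp/spdk.sock and a TCP transport already created earlier in the run are assumed):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice confirms the listener is up; teardown later in the test reverses the order with nvmf_delete_subsystem and bdev_null_delete, exactly as seen after each of the preceding passes.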
00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:46.260 { 00:46:46.260 "params": { 00:46:46.260 "name": "Nvme$subsystem", 00:46:46.260 "trtype": "$TEST_TRANSPORT", 00:46:46.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:46.260 "adrfam": "ipv4", 00:46:46.260 "trsvcid": "$NVMF_PORT", 00:46:46.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:46.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:46.260 "hdgst": ${hdgst:-false}, 00:46:46.260 "ddgst": ${ddgst:-false} 00:46:46.260 }, 00:46:46.260 "method": "bdev_nvme_attach_controller" 00:46:46.260 } 00:46:46.260 EOF 00:46:46.260 )") 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # 
jq . 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:46:46.260 18:13:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:46:46.260 "params": { 00:46:46.260 "name": "Nvme0", 00:46:46.260 "trtype": "tcp", 00:46:46.260 "traddr": "10.0.0.2", 00:46:46.260 "adrfam": "ipv4", 00:46:46.260 "trsvcid": "4420", 00:46:46.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:46.261 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:46.261 "hdgst": false, 00:46:46.261 "ddgst": false 00:46:46.261 }, 00:46:46.261 "method": "bdev_nvme_attach_controller" 00:46:46.261 }' 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:46.521 18:13:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:46.780 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:46.781 ... 
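fio is about to report per-thread results for this pass; the "..." after the filename0 line stands for the second and third jobs that numjobs=3 clones from the same definition. Reconstructed as a plain job file, the parameters the harness selected here would look roughly like the sketch below. bs, iodepth, numjobs and runtime are the values set by the script above; rw=randread and the time_based 5-second run are inferred from the fio header line and the run=...5045msec summaries that follow, so treat this as an approximation of the hidden /dev/fd/61 contents, not a copy of them:

[global]
ioengine=spdk_bdev
; the bdev_nvme_attach_controller config printed above
spdk_json_conf=bdev.json
thread=1
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3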
00:46:46.781 fio-3.35 00:46:46.781 Starting 3 threads 00:46:53.360 00:46:53.360 filename0: (groupid=0, jobs=1): err= 0: pid=3058179: Wed Nov 20 18:13:52 2024 00:46:53.360 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(204MiB/5045msec) 00:46:53.360 slat (nsec): min=5457, max=33483, avg=8318.85, stdev=1849.14 00:46:53.360 clat (usec): min=4919, max=89179, avg=9237.66, stdev=5046.26 00:46:53.360 lat (usec): min=4929, max=89185, avg=9245.97, stdev=5046.33 00:46:53.360 clat percentiles (usec): 00:46:53.360 | 1.00th=[ 5866], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 7701], 00:46:53.360 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 8979], 00:46:53.360 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814], 00:46:53.360 | 99.00th=[46924], 99.50th=[49021], 99.90th=[51643], 99.95th=[89654], 00:46:53.360 | 99.99th=[89654] 00:46:53.360 bw ( KiB/s): min=34816, max=45312, per=33.95%, avg=41728.00, stdev=3016.99, samples=10 00:46:53.360 iops : min= 272, max= 354, avg=326.00, stdev=23.57, samples=10 00:46:53.360 lat (msec) : 10=86.15%, 20=12.50%, 50=1.16%, 100=0.18% 00:46:53.360 cpu : usr=94.11%, sys=5.61%, ctx=41, majf=0, minf=104 00:46:53.360 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:53.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.360 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:53.360 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:53.360 filename0: (groupid=0, jobs=1): err= 0: pid=3058180: Wed Nov 20 18:13:52 2024 00:46:53.360 read: IOPS=319, BW=39.9MiB/s (41.8MB/s)(201MiB/5043msec) 00:46:53.360 slat (nsec): min=5559, max=30877, avg=8146.97, stdev=1776.66 00:46:53.360 clat (usec): min=5543, max=50675, avg=9340.59, stdev=2837.42 00:46:53.360 lat (usec): min=5552, max=50705, avg=9348.74, stdev=2837.65 00:46:53.360 clat percentiles (usec): 00:46:53.360 | 1.00th=[ 6194], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8094], 00:46:53.360 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:46:53.360 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:46:53.360 | 99.00th=[11731], 99.50th=[12256], 99.90th=[48497], 99.95th=[50594], 00:46:53.360 | 99.99th=[50594] 00:46:53.360 bw ( KiB/s): min=37632, max=44288, per=33.49%, avg=41164.80, stdev=1892.02, samples=10 00:46:53.360 iops : min= 294, max= 346, avg=321.60, stdev=14.78, samples=10 00:46:53.360 lat (msec) : 10=74.89%, 20=24.67%, 50=0.37%, 100=0.06% 00:46:53.360 cpu : usr=94.59%, sys=5.20%, ctx=8, majf=0, minf=148 00:46:53.360 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:53.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.360 issued rwts: total=1609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:53.360 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:53.360 filename0: (groupid=0, jobs=1): err= 0: pid=3058181: Wed Nov 20 18:13:52 2024 00:46:53.360 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(200MiB/5045msec) 00:46:53.360 slat (nsec): min=5443, max=32080, avg=7930.69, stdev=2029.39 00:46:53.360 clat (usec): min=5407, max=51506, avg=9404.06, stdev=4181.39 00:46:53.360 lat (usec): min=5416, max=51512, avg=9411.99, stdev=4181.56 00:46:53.360 clat percentiles (usec): 00:46:53.360 | 1.00th=[ 6063], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8094], 
00:46:53.360 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:46:53.360 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:46:53.360 | 99.00th=[46400], 99.50th=[49021], 99.90th=[50070], 99.95th=[51643], 00:46:53.360 | 99.99th=[51643] 00:46:53.360 bw ( KiB/s): min=35840, max=43776, per=33.35%, avg=40985.60, stdev=2150.16, samples=10 00:46:53.360 iops : min= 280, max= 342, avg=320.20, stdev=16.80, samples=10 00:46:53.360 lat (msec) : 10=84.47%, 20=14.47%, 50=0.87%, 100=0.19% 00:46:53.360 cpu : usr=94.13%, sys=5.61%, ctx=7, majf=0, minf=116 00:46:53.360 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:53.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:53.360 issued rwts: total=1603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:53.360 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:53.360 00:46:53.360 Run status group 0 (all jobs): 00:46:53.360 READ: bw=120MiB/s (126MB/s), 39.7MiB/s-40.4MiB/s (41.6MB/s-42.4MB/s), io=606MiB (635MB), run=5043-5045msec 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 bdev_null0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 [2024-11-20 18:13:52.359324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 bdev_null1 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 bdev_null2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.360 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:53.361 18:13:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:53.361 { 00:46:53.361 "params": { 00:46:53.361 "name": "Nvme$subsystem", 00:46:53.361 "trtype": "$TEST_TRANSPORT", 00:46:53.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:53.361 "adrfam": "ipv4", 00:46:53.361 "trsvcid": "$NVMF_PORT", 00:46:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:53.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:53.361 "hdgst": ${hdgst:-false}, 00:46:53.361 "ddgst": ${ddgst:-false} 00:46:53.361 }, 00:46:53.361 "method": "bdev_nvme_attach_controller" 00:46:53.361 } 00:46:53.361 EOF 00:46:53.361 )") 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:53.361 { 00:46:53.361 "params": { 00:46:53.361 "name": "Nvme$subsystem", 00:46:53.361 "trtype": "$TEST_TRANSPORT", 00:46:53.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:53.361 "adrfam": "ipv4", 00:46:53.361 "trsvcid": "$NVMF_PORT", 00:46:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:53.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:53.361 "hdgst": ${hdgst:-false}, 00:46:53.361 "ddgst": ${ddgst:-false} 00:46:53.361 }, 00:46:53.361 "method": "bdev_nvme_attach_controller" 00:46:53.361 } 00:46:53.361 EOF 00:46:53.361 )") 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:53.361 18:13:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:46:53.361 { 00:46:53.361 "params": { 00:46:53.361 "name": "Nvme$subsystem", 00:46:53.361 "trtype": "$TEST_TRANSPORT", 00:46:53.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:53.361 "adrfam": "ipv4", 00:46:53.361 "trsvcid": "$NVMF_PORT", 00:46:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:53.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:53.361 "hdgst": ${hdgst:-false}, 00:46:53.361 "ddgst": ${ddgst:-false} 00:46:53.361 }, 00:46:53.361 "method": "bdev_nvme_attach_controller" 00:46:53.361 } 00:46:53.361 EOF 00:46:53.361 )") 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:46:53.361 "params": { 00:46:53.361 "name": "Nvme0", 00:46:53.361 "trtype": "tcp", 00:46:53.361 "traddr": "10.0.0.2", 00:46:53.361 "adrfam": "ipv4", 00:46:53.361 "trsvcid": "4420", 00:46:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:53.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:53.361 "hdgst": false, 00:46:53.361 "ddgst": false 00:46:53.361 }, 00:46:53.361 "method": "bdev_nvme_attach_controller" 00:46:53.361 },{ 00:46:53.361 "params": { 00:46:53.361 "name": "Nvme1", 00:46:53.361 "trtype": "tcp", 00:46:53.361 "traddr": "10.0.0.2", 00:46:53.361 "adrfam": "ipv4", 00:46:53.361 "trsvcid": "4420", 00:46:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:53.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:53.361 "hdgst": false, 00:46:53.361 "ddgst": false 00:46:53.361 }, 00:46:53.361 "method": "bdev_nvme_attach_controller" 00:46:53.361 },{ 00:46:53.361 "params": { 00:46:53.361 "name": "Nvme2", 00:46:53.361 "trtype": "tcp", 00:46:53.361 "traddr": "10.0.0.2", 00:46:53.361 "adrfam": "ipv4", 00:46:53.361 "trsvcid": "4420", 00:46:53.361 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:53.361 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:53.361 "hdgst": false, 00:46:53.361 "ddgst": false 00:46:53.361 }, 00:46:53.361 "method": "bdev_nvme_attach_controller" 00:46:53.361 }' 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:53.361 
18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:53.361 18:13:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:53.361 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:53.361 ... 00:46:53.361 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:53.361 ... 00:46:53.361 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:53.361 ... 00:46:53.361 fio-3.35 00:46:53.361 Starting 24 threads 00:47:05.596 00:47:05.596 filename0: (groupid=0, jobs=1): err= 0: pid=3059579: Wed Nov 20 18:14:03 2024 00:47:05.596 read: IOPS=717, BW=2869KiB/s (2938kB/s)(28.1MiB/10021msec) 00:47:05.596 slat (usec): min=5, max=108, avg=12.28, stdev=11.54 00:47:05.596 clat (usec): min=3439, max=41045, avg=22215.41, stdev=3623.83 00:47:05.596 lat (usec): min=3450, max=41051, avg=22227.68, stdev=3624.87 00:47:05.596 clat percentiles (usec): 00:47:05.596 | 1.00th=[ 7898], 5.00th=[14222], 10.00th=[17433], 20.00th=[21890], 00:47:05.596 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.596 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[25035], 00:47:05.596 | 99.00th=[32113], 99.50th=[33817], 99.90th=[36963], 99.95th=[41157], 00:47:05.596 | 99.99th=[41157] 00:47:05.596 bw ( KiB/s): min= 2688, max= 3312, per=4.31%, avg=2868.80, stdev=142.55, samples=20 00:47:05.596 iops : min= 672, max= 828, avg=717.20, stdev=35.64, samples=20 00:47:05.596 lat (msec) : 4=0.21%, 10=0.86%, 20=14.52%, 50=84.40% 00:47:05.596 cpu : usr=98.79%, sys=0.88%, ctx=19, majf=0, minf=37 00:47:05.596 IO depths : 1=4.2%, 2=8.6%, 4=18.9%, 8=59.8%, 16=8.6%, 32=0.0%, >=64=0.0% 00:47:05.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 issued rwts: total=7188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.596 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.596 filename0: (groupid=0, jobs=1): err= 0: pid=3059580: Wed Nov 20 18:14:03 2024 00:47:05.596 read: IOPS=688, BW=2752KiB/s (2819kB/s)(26.9MiB/10004msec) 00:47:05.596 slat (usec): min=5, max=176, avg=34.09, stdev=24.42 00:47:05.596 clat (usec): min=9031, max=39721, avg=22903.85, stdev=1916.12 00:47:05.596 lat (usec): min=9037, max=39746, avg=22937.94, stdev=1918.09 00:47:05.596 clat percentiles (usec): 00:47:05.596 | 1.00th=[13829], 5.00th=[21890], 10.00th=[22414], 20.00th=[22414], 00:47:05.596 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[22938], 00:47:05.596 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23987], 95.00th=[24249], 00:47:05.596 | 99.00th=[27132], 99.50th=[32637], 99.90th=[39060], 99.95th=[39584], 00:47:05.596 | 99.99th=[39584] 00:47:05.596 bw ( KiB/s): min= 2560, max= 2896, per=4.12%, avg=2743.58, stdev=83.37, samples=19 00:47:05.596 iops : min= 640, max= 724, avg=685.89, stdev=20.84, samples=19 00:47:05.596 lat (msec) : 10=0.29%, 20=2.75%, 50=96.96% 00:47:05.596 cpu : usr=98.85%, sys=0.72%, ctx=126, majf=0, minf=25 00:47:05.596 IO depths : 1=5.3%, 2=11.2%, 4=23.9%, 8=52.3%, 
16=7.2%, 32=0.0%, >=64=0.0% 00:47:05.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 issued rwts: total=6884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.596 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.596 filename0: (groupid=0, jobs=1): err= 0: pid=3059581: Wed Nov 20 18:14:03 2024 00:47:05.596 read: IOPS=696, BW=2785KiB/s (2852kB/s)(27.2MiB/10010msec) 00:47:05.596 slat (usec): min=5, max=182, avg=23.26, stdev=22.61 00:47:05.596 clat (usec): min=8970, max=41989, avg=22789.10, stdev=2664.47 00:47:05.596 lat (usec): min=8980, max=42001, avg=22812.36, stdev=2666.09 00:47:05.596 clat percentiles (usec): 00:47:05.596 | 1.00th=[12911], 5.00th=[17171], 10.00th=[22152], 20.00th=[22676], 00:47:05.596 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:47:05.596 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:47:05.596 | 99.00th=[32113], 99.50th=[35390], 99.90th=[41157], 99.95th=[42206], 00:47:05.596 | 99.99th=[42206] 00:47:05.596 bw ( KiB/s): min= 2688, max= 3168, per=4.18%, avg=2781.60, stdev=112.45, samples=20 00:47:05.596 iops : min= 672, max= 792, avg=695.40, stdev=28.11, samples=20 00:47:05.596 lat (msec) : 10=0.09%, 20=7.40%, 50=92.51% 00:47:05.596 cpu : usr=98.96%, sys=0.73%, ctx=23, majf=0, minf=29 00:47:05.596 IO depths : 1=5.6%, 2=11.2%, 4=22.9%, 8=53.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:47:05.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 issued rwts: total=6970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.596 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.596 filename0: (groupid=0, jobs=1): err= 0: pid=3059582: Wed Nov 20 18:14:03 2024 00:47:05.596 read: IOPS=690, BW=2762KiB/s (2829kB/s)(27.0MiB/10016msec) 00:47:05.596 slat (usec): min=5, max=110, avg=14.53, stdev=13.65 00:47:05.596 clat (usec): min=5368, max=33200, avg=23049.63, stdev=1779.62 00:47:05.596 lat (usec): min=5374, max=33205, avg=23064.16, stdev=1780.23 00:47:05.596 clat percentiles (usec): 00:47:05.596 | 1.00th=[13042], 5.00th=[22152], 10.00th=[22676], 20.00th=[22676], 00:47:05.596 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:47:05.596 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24511], 00:47:05.596 | 99.00th=[25297], 99.50th=[27395], 99.90th=[33162], 99.95th=[33162], 00:47:05.596 | 99.99th=[33162] 00:47:05.596 bw ( KiB/s): min= 2688, max= 2920, per=4.14%, avg=2760.40, stdev=82.30, samples=20 00:47:05.596 iops : min= 672, max= 730, avg=690.10, stdev=20.58, samples=20 00:47:05.596 lat (msec) : 10=0.19%, 20=2.89%, 50=96.92% 00:47:05.596 cpu : usr=98.91%, sys=0.76%, ctx=17, majf=0, minf=31 00:47:05.596 IO depths : 1=5.6%, 2=11.6%, 4=23.9%, 8=52.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:47:05.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 issued rwts: total=6917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.596 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.596 filename0: (groupid=0, jobs=1): err= 0: pid=3059583: Wed Nov 20 18:14:03 2024 00:47:05.596 read: IOPS=693, BW=2773KiB/s (2840kB/s)(27.1MiB/10004msec) 00:47:05.596 slat (usec): min=5, max=113, avg=30.47, stdev=17.42 00:47:05.596 clat (usec): 
min=6551, max=43655, avg=22813.52, stdev=2486.01 00:47:05.596 lat (usec): min=6557, max=43671, avg=22843.99, stdev=2488.95 00:47:05.596 clat percentiles (usec): 00:47:05.596 | 1.00th=[12911], 5.00th=[18744], 10.00th=[22414], 20.00th=[22676], 00:47:05.596 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.596 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23987], 95.00th=[24511], 00:47:05.596 | 99.00th=[30278], 99.50th=[33817], 99.90th=[43779], 99.95th=[43779], 00:47:05.596 | 99.99th=[43779] 00:47:05.596 bw ( KiB/s): min= 2432, max= 2960, per=4.14%, avg=2758.74, stdev=116.15, samples=19 00:47:05.596 iops : min= 608, max= 740, avg=689.68, stdev=29.04, samples=19 00:47:05.596 lat (msec) : 10=0.46%, 20=5.28%, 50=94.26% 00:47:05.596 cpu : usr=98.98%, sys=0.70%, ctx=16, majf=0, minf=23 00:47:05.596 IO depths : 1=5.1%, 2=10.6%, 4=22.2%, 8=54.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:47:05.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 issued rwts: total=6936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.596 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.596 filename0: (groupid=0, jobs=1): err= 0: pid=3059584: Wed Nov 20 18:14:03 2024 00:47:05.596 read: IOPS=701, BW=2807KiB/s (2874kB/s)(27.4MiB/10012msec) 00:47:05.596 slat (usec): min=5, max=121, avg=22.27, stdev=20.07 00:47:05.596 clat (usec): min=9562, max=42576, avg=22611.82, stdev=3515.95 00:47:05.596 lat (usec): min=9571, max=42605, avg=22634.09, stdev=3518.59 00:47:05.596 clat percentiles (usec): 00:47:05.596 | 1.00th=[12387], 5.00th=[15926], 10.00th=[17695], 20.00th=[22152], 00:47:05.596 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.596 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24511], 95.00th=[27132], 00:47:05.596 | 99.00th=[34866], 99.50th=[38536], 99.90th=[42206], 99.95th=[42730], 00:47:05.596 | 99.99th=[42730] 00:47:05.596 bw ( KiB/s): min= 2560, max= 3104, per=4.20%, avg=2797.47, stdev=138.18, samples=19 00:47:05.596 iops : min= 640, max= 776, avg=699.37, stdev=34.54, samples=19 00:47:05.596 lat (msec) : 10=0.14%, 20=14.46%, 50=85.40% 00:47:05.596 cpu : usr=99.19%, sys=0.51%, ctx=13, majf=0, minf=19 00:47:05.596 IO depths : 1=3.4%, 2=6.9%, 4=15.9%, 8=63.9%, 16=9.8%, 32=0.0%, >=64=0.0% 00:47:05.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 complete : 0=0.0%, 4=91.7%, 8=3.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.596 issued rwts: total=7026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.596 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.596 filename0: (groupid=0, jobs=1): err= 0: pid=3059585: Wed Nov 20 18:14:03 2024 00:47:05.596 read: IOPS=706, BW=2824KiB/s (2892kB/s)(27.6MiB/10002msec) 00:47:05.596 slat (usec): min=5, max=164, avg=26.68, stdev=21.89 00:47:05.596 clat (usec): min=8004, max=42677, avg=22426.50, stdev=3322.79 00:47:05.596 lat (usec): min=8018, max=42698, avg=22453.18, stdev=3326.47 00:47:05.596 clat percentiles (usec): 00:47:05.596 | 1.00th=[13042], 5.00th=[15270], 10.00th=[17695], 20.00th=[22152], 00:47:05.596 | 30.00th=[22676], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:47:05.596 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23987], 95.00th=[25035], 00:47:05.596 | 99.00th=[34341], 99.50th=[36439], 99.90th=[42206], 99.95th=[42730], 00:47:05.596 | 99.99th=[42730] 00:47:05.596 bw ( KiB/s): min= 2656, max= 3158, per=4.24%, avg=2825.58, 
stdev=154.30, samples=19 00:47:05.597 iops : min= 664, max= 789, avg=706.37, stdev=38.52, samples=19 00:47:05.597 lat (msec) : 10=0.08%, 20=14.30%, 50=85.61% 00:47:05.597 cpu : usr=98.78%, sys=0.75%, ctx=52, majf=0, minf=27 00:47:05.597 IO depths : 1=4.4%, 2=8.9%, 4=19.3%, 8=59.0%, 16=8.4%, 32=0.0%, >=64=0.0% 00:47:05.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 complete : 0=0.0%, 4=92.5%, 8=2.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 issued rwts: total=7062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.597 filename0: (groupid=0, jobs=1): err= 0: pid=3059586: Wed Nov 20 18:14:03 2024 00:47:05.597 read: IOPS=686, BW=2747KiB/s (2813kB/s)(26.8MiB/10005msec) 00:47:05.597 slat (usec): min=5, max=166, avg=20.15, stdev=19.08 00:47:05.597 clat (usec): min=3053, max=58393, avg=23188.83, stdev=3901.73 00:47:05.597 lat (usec): min=3060, max=58410, avg=23208.98, stdev=3902.22 00:47:05.597 clat percentiles (usec): 00:47:05.597 | 1.00th=[11076], 5.00th=[16188], 10.00th=[20317], 20.00th=[22414], 00:47:05.597 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:47:05.597 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25560], 95.00th=[30278], 00:47:05.597 | 99.00th=[35390], 99.50th=[36439], 99.90th=[44303], 99.95th=[58459], 00:47:05.597 | 99.99th=[58459] 00:47:05.597 bw ( KiB/s): min= 2480, max= 2848, per=4.11%, avg=2734.32, stdev=88.27, samples=19 00:47:05.597 iops : min= 620, max= 712, avg=683.58, stdev=22.07, samples=19 00:47:05.597 lat (msec) : 4=0.01%, 10=0.63%, 20=8.82%, 50=90.47%, 100=0.07% 00:47:05.597 cpu : usr=98.62%, sys=0.93%, ctx=51, majf=0, minf=25 00:47:05.597 IO depths : 1=0.5%, 2=1.1%, 4=5.7%, 8=78.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:47:05.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 complete : 0=0.0%, 4=89.7%, 8=6.8%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 issued rwts: total=6872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.597 filename1: (groupid=0, jobs=1): err= 0: pid=3059587: Wed Nov 20 18:14:03 2024 00:47:05.597 read: IOPS=691, BW=2767KiB/s (2834kB/s)(27.0MiB/10005msec) 00:47:05.597 slat (usec): min=6, max=145, avg=23.66, stdev=18.63 00:47:05.597 clat (usec): min=7446, max=43421, avg=22912.19, stdev=3273.53 00:47:05.597 lat (usec): min=7454, max=43438, avg=22935.85, stdev=3274.66 00:47:05.597 clat percentiles (usec): 00:47:05.597 | 1.00th=[12125], 5.00th=[16450], 10.00th=[21890], 20.00th=[22414], 00:47:05.597 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.597 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25560], 00:47:05.597 | 99.00th=[34866], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:47:05.597 | 99.99th=[43254] 00:47:05.597 bw ( KiB/s): min= 2484, max= 3056, per=4.15%, avg=2766.53, stdev=129.60, samples=19 00:47:05.597 iops : min= 621, max= 764, avg=691.63, stdev=32.40, samples=19 00:47:05.597 lat (msec) : 10=0.59%, 20=6.75%, 50=92.66% 00:47:05.597 cpu : usr=98.25%, sys=1.13%, ctx=98, majf=0, minf=26 00:47:05.597 IO depths : 1=4.7%, 2=9.4%, 4=20.0%, 8=57.6%, 16=8.3%, 32=0.0%, >=64=0.0% 00:47:05.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 complete : 0=0.0%, 4=92.8%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 issued rwts: total=6922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.597 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:47:05.597 filename1: (groupid=0, jobs=1): err= 0: pid=3059588: Wed Nov 20 18:14:03 2024 00:47:05.597 read: IOPS=711, BW=2845KiB/s (2913kB/s)(27.8MiB/10002msec) 00:47:05.597 slat (usec): min=5, max=155, avg=15.91, stdev=16.35 00:47:05.597 clat (usec): min=6997, max=41002, avg=22370.32, stdev=3155.25 00:47:05.597 lat (usec): min=7002, max=41032, avg=22386.24, stdev=3156.96 00:47:05.597 clat percentiles (usec): 00:47:05.597 | 1.00th=[12125], 5.00th=[15139], 10.00th=[17957], 20.00th=[22414], 00:47:05.597 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.597 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24773], 00:47:05.597 | 99.00th=[31589], 99.50th=[34866], 99.90th=[40109], 99.95th=[41157], 00:47:05.597 | 99.99th=[41157] 00:47:05.597 bw ( KiB/s): min= 2688, max= 3264, per=4.27%, avg=2847.16, stdev=170.73, samples=19 00:47:05.597 iops : min= 672, max= 816, avg=711.79, stdev=42.68, samples=19 00:47:05.597 lat (msec) : 10=0.39%, 20=13.17%, 50=86.44% 00:47:05.597 cpu : usr=98.80%, sys=0.88%, ctx=15, majf=0, minf=33 00:47:05.597 IO depths : 1=4.7%, 2=9.7%, 4=21.3%, 8=56.4%, 16=7.9%, 32=0.0%, >=64=0.0% 00:47:05.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 issued rwts: total=7114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.597 filename1: (groupid=0, jobs=1): err= 0: pid=3059589: Wed Nov 20 18:14:03 2024 00:47:05.597 read: IOPS=688, BW=2752KiB/s (2819kB/s)(26.9MiB/10004msec) 00:47:05.597 slat (usec): min=5, max=132, avg=30.60, stdev=21.31 00:47:05.597 clat (usec): min=7495, max=56858, avg=22952.03, stdev=2622.19 00:47:05.597 lat (usec): min=7530, max=56877, avg=22982.63, stdev=2623.41 00:47:05.597 clat percentiles (usec): 00:47:05.597 | 1.00th=[13304], 5.00th=[20841], 10.00th=[22414], 20.00th=[22414], 00:47:05.597 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.597 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23987], 95.00th=[24773], 00:47:05.597 | 99.00th=[32637], 99.50th=[36439], 99.90th=[42730], 99.95th=[56886], 00:47:05.597 | 99.99th=[56886] 00:47:05.597 bw ( KiB/s): min= 2436, max= 2912, per=4.11%, avg=2737.05, stdev=106.10, samples=19 00:47:05.597 iops : min= 609, max= 728, avg=684.26, stdev=26.52, samples=19 00:47:05.597 lat (msec) : 10=0.49%, 20=3.85%, 50=95.58%, 100=0.07% 00:47:05.597 cpu : usr=98.95%, sys=0.72%, ctx=28, majf=0, minf=27 00:47:05.597 IO depths : 1=5.2%, 2=10.6%, 4=22.4%, 8=54.2%, 16=7.6%, 32=0.0%, >=64=0.0% 00:47:05.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 issued rwts: total=6884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.597 filename1: (groupid=0, jobs=1): err= 0: pid=3059590: Wed Nov 20 18:14:03 2024 00:47:05.597 read: IOPS=689, BW=2759KiB/s (2825kB/s)(27.0MiB/10015msec) 00:47:05.597 slat (usec): min=5, max=123, avg=28.90, stdev=22.66 00:47:05.597 clat (usec): min=10072, max=41250, avg=22967.51, stdev=2142.64 00:47:05.597 lat (usec): min=10080, max=41258, avg=22996.41, stdev=2142.93 00:47:05.597 clat percentiles (usec): 00:47:05.597 | 1.00th=[14484], 5.00th=[19530], 10.00th=[22414], 20.00th=[22676], 00:47:05.597 | 30.00th=[22676], 40.00th=[22938], 
50.00th=[22938], 60.00th=[23200], 00:47:05.597 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24511], 00:47:05.597 | 99.00th=[30802], 99.50th=[34866], 99.90th=[39584], 99.95th=[41157], 00:47:05.597 | 99.99th=[41157] 00:47:05.597 bw ( KiB/s): min= 2688, max= 2853, per=4.14%, avg=2757.05, stdev=64.82, samples=20 00:47:05.597 iops : min= 672, max= 713, avg=689.25, stdev=16.19, samples=20 00:47:05.597 lat (msec) : 20=5.24%, 50=94.76% 00:47:05.597 cpu : usr=98.70%, sys=0.80%, ctx=137, majf=0, minf=27 00:47:05.597 IO depths : 1=1.4%, 2=6.7%, 4=22.4%, 8=58.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:47:05.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 issued rwts: total=6908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.597 filename1: (groupid=0, jobs=1): err= 0: pid=3059591: Wed Nov 20 18:14:03 2024 00:47:05.597 read: IOPS=686, BW=2748KiB/s (2814kB/s)(26.9MiB/10015msec) 00:47:05.597 slat (usec): min=5, max=125, avg=20.00, stdev=15.45 00:47:05.597 clat (usec): min=11210, max=41268, avg=23120.39, stdev=1993.85 00:47:05.597 lat (usec): min=11218, max=41274, avg=23140.39, stdev=1994.94 00:47:05.597 clat percentiles (usec): 00:47:05.597 | 1.00th=[14877], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:47:05.597 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:47:05.597 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[24773], 00:47:05.597 | 99.00th=[29492], 99.50th=[34341], 99.90th=[38536], 99.95th=[41157], 00:47:05.597 | 99.99th=[41157] 00:47:05.597 bw ( KiB/s): min= 2688, max= 2885, per=4.12%, avg=2745.85, stdev=66.49, samples=20 00:47:05.597 iops : min= 672, max= 721, avg=686.45, stdev=16.60, samples=20 00:47:05.597 lat (msec) : 20=3.74%, 50=96.26% 00:47:05.597 cpu : usr=98.80%, sys=0.88%, ctx=13, majf=0, minf=26 00:47:05.597 IO depths : 1=5.8%, 2=11.5%, 4=23.4%, 8=52.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:47:05.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 issued rwts: total=6880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.597 filename1: (groupid=0, jobs=1): err= 0: pid=3059592: Wed Nov 20 18:14:03 2024 00:47:05.597 read: IOPS=688, BW=2753KiB/s (2819kB/s)(26.9MiB/10004msec) 00:47:05.597 slat (usec): min=5, max=131, avg=30.41, stdev=21.34 00:47:05.597 clat (usec): min=11592, max=45962, avg=22967.72, stdev=2819.67 00:47:05.597 lat (usec): min=11611, max=45972, avg=22998.13, stdev=2821.85 00:47:05.597 clat percentiles (usec): 00:47:05.597 | 1.00th=[13960], 5.00th=[17957], 10.00th=[22152], 20.00th=[22414], 00:47:05.597 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.597 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23987], 95.00th=[25297], 00:47:05.597 | 99.00th=[34341], 99.50th=[35390], 99.90th=[41157], 99.95th=[45876], 00:47:05.597 | 99.99th=[45876] 00:47:05.597 bw ( KiB/s): min= 2560, max= 2912, per=4.12%, avg=2744.95, stdev=85.22, samples=19 00:47:05.597 iops : min= 640, max= 728, avg=686.21, stdev=21.32, samples=19 00:47:05.597 lat (msec) : 20=7.12%, 50=92.88% 00:47:05.597 cpu : usr=98.79%, sys=0.85%, ctx=78, majf=0, minf=28 00:47:05.597 IO depths : 1=4.6%, 2=9.4%, 4=20.1%, 8=57.7%, 16=8.2%, 32=0.0%, >=64=0.0% 00:47:05.597 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 complete : 0=0.0%, 4=92.8%, 8=1.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.597 issued rwts: total=6886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.597 filename1: (groupid=0, jobs=1): err= 0: pid=3059593: Wed Nov 20 18:14:03 2024 00:47:05.597 read: IOPS=687, BW=2750KiB/s (2816kB/s)(26.9MiB/10003msec) 00:47:05.598 slat (usec): min=5, max=107, avg=24.54, stdev=16.43 00:47:05.598 clat (usec): min=10541, max=34788, avg=23082.09, stdev=1589.85 00:47:05.598 lat (usec): min=10571, max=34798, avg=23106.64, stdev=1590.19 00:47:05.598 clat percentiles (usec): 00:47:05.598 | 1.00th=[15008], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:47:05.598 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:47:05.598 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24773], 00:47:05.598 | 99.00th=[27132], 99.50th=[29754], 99.90th=[32113], 99.95th=[34866], 00:47:05.598 | 99.99th=[34866] 00:47:05.598 bw ( KiB/s): min= 2688, max= 2944, per=4.12%, avg=2744.42, stdev=76.62, samples=19 00:47:05.598 iops : min= 672, max= 736, avg=686.11, stdev=19.15, samples=19 00:47:05.598 lat (msec) : 20=2.89%, 50=97.11% 00:47:05.598 cpu : usr=98.67%, sys=1.00%, ctx=15, majf=0, minf=29 00:47:05.598 IO depths : 1=5.5%, 2=11.0%, 4=23.0%, 8=53.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:47:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 issued rwts: total=6876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.598 filename1: (groupid=0, jobs=1): err= 0: pid=3059594: Wed Nov 20 18:14:03 2024 00:47:05.598 read: IOPS=689, BW=2758KiB/s (2824kB/s)(26.9MiB/10003msec) 00:47:05.598 slat (usec): min=5, max=112, avg=11.38, stdev=10.15 00:47:05.598 clat (usec): min=3982, max=42323, avg=23162.87, stdev=2666.82 00:47:05.598 lat (usec): min=3988, max=42343, avg=23174.25, stdev=2667.67 00:47:05.598 clat percentiles (usec): 00:47:05.598 | 1.00th=[12518], 5.00th=[19792], 10.00th=[22414], 20.00th=[22676], 00:47:05.598 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:47:05.598 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25035], 00:47:05.598 | 99.00th=[31589], 99.50th=[35914], 99.90th=[42206], 99.95th=[42206], 00:47:05.598 | 99.99th=[42206] 00:47:05.598 bw ( KiB/s): min= 2400, max= 2864, per=4.12%, avg=2741.89, stdev=93.02, samples=19 00:47:05.598 iops : min= 600, max= 716, avg=685.47, stdev=23.26, samples=19 00:47:05.598 lat (msec) : 4=0.04%, 10=0.48%, 20=4.52%, 50=94.95% 00:47:05.598 cpu : usr=98.92%, sys=0.76%, ctx=14, majf=0, minf=57 00:47:05.598 IO depths : 1=0.1%, 2=0.1%, 4=1.4%, 8=80.9%, 16=17.5%, 32=0.0%, >=64=0.0% 00:47:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 complete : 0=0.0%, 4=89.4%, 8=9.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 issued rwts: total=6896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.598 filename2: (groupid=0, jobs=1): err= 0: pid=3059595: Wed Nov 20 18:14:03 2024 00:47:05.598 read: IOPS=694, BW=2777KiB/s (2843kB/s)(27.1MiB/10003msec) 00:47:05.598 slat (usec): min=5, max=101, avg=22.55, stdev=15.87 00:47:05.598 clat (usec): min=6357, max=55859, avg=22869.20, stdev=2955.41 
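A note on reading these fio blocks: slat is submission latency (time spent handing the IO to the spdk_bdev engine), clat is completion latency, and the lat line is their end-to-end sum, so the printed averages should reconcile. A quick check against the job around this point (pid 3059595), whose lat line follows just below:

  # slat avg + clat avg should equal the avg fio prints on the lat line (all usec)
  echo '22.55 + 22869.20' | bc    # -> 22891.75, matching the lat avg below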
00:47:05.598 lat (usec): min=6363, max=55882, avg=22891.75, stdev=2956.54 00:47:05.598 clat percentiles (usec): 00:47:05.598 | 1.00th=[12387], 5.00th=[17433], 10.00th=[21890], 20.00th=[22676], 00:47:05.598 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.598 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[25035], 00:47:05.598 | 99.00th=[32113], 99.50th=[35914], 99.90th=[42206], 99.95th=[55837], 00:47:05.598 | 99.99th=[55837] 00:47:05.598 bw ( KiB/s): min= 2672, max= 2928, per=4.15%, avg=2762.11, stdev=74.13, samples=19 00:47:05.598 iops : min= 668, max= 732, avg=690.53, stdev=18.53, samples=19 00:47:05.598 lat (msec) : 10=0.60%, 20=6.13%, 50=93.19%, 100=0.07% 00:47:05.598 cpu : usr=98.97%, sys=0.70%, ctx=15, majf=0, minf=47 00:47:05.598 IO depths : 1=4.1%, 2=8.9%, 4=20.1%, 8=58.1%, 16=8.8%, 32=0.0%, >=64=0.0% 00:47:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 complete : 0=0.0%, 4=92.8%, 8=1.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 issued rwts: total=6944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.598 filename2: (groupid=0, jobs=1): err= 0: pid=3059596: Wed Nov 20 18:14:03 2024 00:47:05.598 read: IOPS=690, BW=2761KiB/s (2828kB/s)(27.0MiB/10015msec) 00:47:05.598 slat (usec): min=5, max=112, avg=18.16, stdev=15.40 00:47:05.598 clat (usec): min=11566, max=35756, avg=23012.91, stdev=1722.26 00:47:05.598 lat (usec): min=11580, max=35785, avg=23031.07, stdev=1723.45 00:47:05.598 clat percentiles (usec): 00:47:05.598 | 1.00th=[15139], 5.00th=[21890], 10.00th=[22414], 20.00th=[22676], 00:47:05.598 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:47:05.598 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:47:05.598 | 99.00th=[25297], 99.50th=[30278], 99.90th=[34866], 99.95th=[35914], 00:47:05.598 | 99.99th=[35914] 00:47:05.598 bw ( KiB/s): min= 2688, max= 2965, per=4.14%, avg=2759.45, stdev=80.16, samples=20 00:47:05.598 iops : min= 672, max= 741, avg=689.85, stdev=20.01, samples=20 00:47:05.598 lat (msec) : 20=3.70%, 50=96.30% 00:47:05.598 cpu : usr=98.58%, sys=0.91%, ctx=126, majf=0, minf=25 00:47:05.598 IO depths : 1=5.9%, 2=11.8%, 4=24.0%, 8=51.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:47:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 issued rwts: total=6914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.598 filename2: (groupid=0, jobs=1): err= 0: pid=3059597: Wed Nov 20 18:14:03 2024 00:47:05.598 read: IOPS=679, BW=2719KiB/s (2784kB/s)(26.6MiB/10004msec) 00:47:05.598 slat (usec): min=5, max=114, avg=19.20, stdev=16.95 00:47:05.598 clat (usec): min=7383, max=49439, avg=23405.68, stdev=3531.13 00:47:05.598 lat (usec): min=7415, max=49460, avg=23424.88, stdev=3531.67 00:47:05.598 clat percentiles (usec): 00:47:05.598 | 1.00th=[12780], 5.00th=[17957], 10.00th=[21365], 20.00th=[22676], 00:47:05.598 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:47:05.598 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25560], 95.00th=[29230], 00:47:05.598 | 99.00th=[36439], 99.50th=[40109], 99.90th=[49546], 99.95th=[49546], 00:47:05.598 | 99.99th=[49546] 00:47:05.598 bw ( KiB/s): min= 2388, max= 2928, per=4.07%, avg=2708.42, stdev=111.47, samples=19 00:47:05.598 iops : min= 597, 
max= 732, avg=677.11, stdev=27.87, samples=19 00:47:05.598 lat (msec) : 10=0.26%, 20=8.16%, 50=91.57% 00:47:05.598 cpu : usr=98.89%, sys=0.78%, ctx=11, majf=0, minf=32 00:47:05.598 IO depths : 1=1.8%, 2=3.6%, 4=9.6%, 8=71.8%, 16=13.2%, 32=0.0%, >=64=0.0% 00:47:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 complete : 0=0.0%, 4=90.4%, 8=6.3%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.598 filename2: (groupid=0, jobs=1): err= 0: pid=3059598: Wed Nov 20 18:14:03 2024 00:47:05.598 read: IOPS=688, BW=2753KiB/s (2820kB/s)(26.9MiB/10018msec) 00:47:05.598 slat (usec): min=5, max=136, avg=23.54, stdev=18.76 00:47:05.598 clat (usec): min=10348, max=36040, avg=23061.75, stdev=1697.74 00:47:05.598 lat (usec): min=10361, max=36048, avg=23085.29, stdev=1697.27 00:47:05.598 clat percentiles (usec): 00:47:05.598 | 1.00th=[14222], 5.00th=[22152], 10.00th=[22414], 20.00th=[22676], 00:47:05.598 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:47:05.598 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24511], 00:47:05.598 | 99.00th=[27395], 99.50th=[28443], 99.90th=[35914], 99.95th=[35914], 00:47:05.598 | 99.99th=[35914] 00:47:05.598 bw ( KiB/s): min= 2640, max= 2992, per=4.13%, avg=2752.00, stdev=87.02, samples=20 00:47:05.598 iops : min= 660, max= 748, avg=688.00, stdev=21.75, samples=20 00:47:05.598 lat (msec) : 20=3.18%, 50=96.82% 00:47:05.598 cpu : usr=98.81%, sys=0.86%, ctx=15, majf=0, minf=27 00:47:05.598 IO depths : 1=5.6%, 2=11.2%, 4=23.2%, 8=53.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:47:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 complete : 0=0.0%, 4=93.6%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 issued rwts: total=6896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.598 filename2: (groupid=0, jobs=1): err= 0: pid=3059599: Wed Nov 20 18:14:03 2024 00:47:05.598 read: IOPS=711, BW=2846KiB/s (2915kB/s)(27.9MiB/10020msec) 00:47:05.598 slat (nsec): min=5411, max=87163, avg=9483.46, stdev=7624.14 00:47:05.598 clat (usec): min=2871, max=41340, avg=22412.11, stdev=3427.21 00:47:05.598 lat (usec): min=2880, max=41346, avg=22421.60, stdev=3427.74 00:47:05.598 clat percentiles (usec): 00:47:05.598 | 1.00th=[ 9503], 5.00th=[15008], 10.00th=[17957], 20.00th=[22414], 00:47:05.598 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:47:05.598 | 70.00th=[23462], 80.00th=[23725], 90.00th=[23987], 95.00th=[24773], 00:47:05.598 | 99.00th=[32900], 99.50th=[35914], 99.90th=[40109], 99.95th=[41157], 00:47:05.598 | 99.99th=[41157] 00:47:05.598 bw ( KiB/s): min= 2688, max= 3168, per=4.27%, avg=2845.60, stdev=131.35, samples=20 00:47:05.598 iops : min= 672, max= 792, avg=711.40, stdev=32.84, samples=20 00:47:05.598 lat (msec) : 4=0.22%, 10=1.21%, 20=11.29%, 50=87.28% 00:47:05.598 cpu : usr=98.89%, sys=0.78%, ctx=13, majf=0, minf=35 00:47:05.598 IO depths : 1=4.8%, 2=9.6%, 4=20.6%, 8=57.2%, 16=7.8%, 32=0.0%, >=64=0.0% 00:47:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.598 issued rwts: total=7130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.598 filename2: 
(groupid=0, jobs=1): err= 0: pid=3059600: Wed Nov 20 18:14:03 2024 00:47:05.598 read: IOPS=702, BW=2811KiB/s (2878kB/s)(27.5MiB/10013msec) 00:47:05.598 slat (usec): min=5, max=120, avg=12.24, stdev=11.03 00:47:05.598 clat (usec): min=6780, max=42582, avg=22694.59, stdev=3399.57 00:47:05.598 lat (usec): min=6786, max=42591, avg=22706.83, stdev=3400.40 00:47:05.598 clat percentiles (usec): 00:47:05.598 | 1.00th=[13042], 5.00th=[15795], 10.00th=[17695], 20.00th=[22414], 00:47:05.598 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 00:47:05.598 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24773], 95.00th=[27132], 00:47:05.598 | 99.00th=[32375], 99.50th=[35914], 99.90th=[40633], 99.95th=[42730], 00:47:05.599 | 99.99th=[42730] 00:47:05.599 bw ( KiB/s): min= 2688, max= 3008, per=4.21%, avg=2808.00, stdev=86.01, samples=20 00:47:05.599 iops : min= 672, max= 752, avg=702.00, stdev=21.50, samples=20 00:47:05.599 lat (msec) : 10=0.16%, 20=13.32%, 50=86.53% 00:47:05.599 cpu : usr=99.02%, sys=0.65%, ctx=14, majf=0, minf=21 00:47:05.599 IO depths : 1=1.0%, 2=1.9%, 4=6.7%, 8=76.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:47:05.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.599 complete : 0=0.0%, 4=89.8%, 8=7.0%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.599 issued rwts: total=7036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.599 filename2: (groupid=0, jobs=1): err= 0: pid=3059601: Wed Nov 20 18:14:03 2024 00:47:05.599 read: IOPS=690, BW=2763KiB/s (2829kB/s)(27.0MiB/10003msec) 00:47:05.599 slat (usec): min=5, max=136, avg=30.31, stdev=18.30 00:47:05.599 clat (usec): min=7409, max=42300, avg=22912.26, stdev=2500.77 00:47:05.599 lat (usec): min=7419, max=42321, avg=22942.56, stdev=2502.54 00:47:05.599 clat percentiles (usec): 00:47:05.599 | 1.00th=[12256], 5.00th=[20579], 10.00th=[22414], 20.00th=[22676], 00:47:05.599 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.599 | 70.00th=[23462], 80.00th=[23462], 90.00th=[23987], 95.00th=[24511], 00:47:05.599 | 99.00th=[32113], 99.50th=[34866], 99.90th=[42206], 99.95th=[42206], 00:47:05.599 | 99.99th=[42206] 00:47:05.599 bw ( KiB/s): min= 2432, max= 2944, per=4.12%, avg=2747.79, stdev=107.84, samples=19 00:47:05.599 iops : min= 608, max= 736, avg=686.95, stdev=26.96, samples=19 00:47:05.599 lat (msec) : 10=0.46%, 20=4.01%, 50=95.53% 00:47:05.599 cpu : usr=98.94%, sys=0.73%, ctx=14, majf=0, minf=28 00:47:05.599 IO depths : 1=5.0%, 2=10.5%, 4=22.6%, 8=54.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:47:05.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.599 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.599 issued rwts: total=6910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.599 filename2: (groupid=0, jobs=1): err= 0: pid=3059602: Wed Nov 20 18:14:03 2024 00:47:05.599 read: IOPS=697, BW=2792KiB/s (2859kB/s)(27.3MiB/10015msec) 00:47:05.599 slat (usec): min=5, max=141, avg=28.05, stdev=18.63 00:47:05.599 clat (usec): min=10695, max=41399, avg=22688.26, stdev=2573.27 00:47:05.599 lat (usec): min=10704, max=41410, avg=22716.31, stdev=2576.04 00:47:05.599 clat percentiles (usec): 00:47:05.599 | 1.00th=[13566], 5.00th=[16450], 10.00th=[21365], 20.00th=[22414], 00:47:05.599 | 30.00th=[22676], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:47:05.599 | 70.00th=[23200], 80.00th=[23462], 
90.00th=[23987], 95.00th=[24773], 00:47:05.599 | 99.00th=[30802], 99.50th=[31851], 99.90th=[40109], 99.95th=[41157], 00:47:05.599 | 99.99th=[41157] 00:47:05.599 bw ( KiB/s): min= 2688, max= 3056, per=4.19%, avg=2789.60, stdev=104.38, samples=20 00:47:05.599 iops : min= 672, max= 764, avg=697.40, stdev=26.09, samples=20 00:47:05.599 lat (msec) : 20=8.76%, 50=91.24% 00:47:05.599 cpu : usr=99.23%, sys=0.47%, ctx=13, majf=0, minf=30 00:47:05.599 IO depths : 1=4.7%, 2=9.5%, 4=20.3%, 8=57.3%, 16=8.2%, 32=0.0%, >=64=0.0% 00:47:05.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.599 complete : 0=0.0%, 4=92.7%, 8=1.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:05.599 issued rwts: total=6990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:05.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:05.599 00:47:05.599 Run status group 0 (all jobs): 00:47:05.599 READ: bw=65.0MiB/s (68.2MB/s), 2719KiB/s-2869KiB/s (2784kB/s-2938kB/s), io=652MiB (683MB), run=10002-10021msec 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 
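The "Run status group 0" summary above closes the rand_params read phase: per-thread bandwidth between 2719 KiB/s and 2869 KiB/s, 652 MiB total in roughly 10 s for a 65.0 MiB/s aggregate. The teardown being traced here then removes each test subsystem with two JSON-RPC calls, nvmf_delete_subsystem followed by bdev_null_delete. A minimal standalone sketch of the same loop, assuming scripts/rpc.py from the SPDK checkout in this workspace and a target still listening on the default /var/tmp/spdk.sock:

  # Mirrors the destroy_subsystems 0 1 2 helper traced in this log.
  for sub in 0 1 2; do
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
      scripts/rpc.py bdev_null_delete "bdev_null${sub}"
  done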
00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 bdev_null0 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 
-t tcp -a 10.0.0.2 -s 4420 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 [2024-11-20 18:14:04.144145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 bdev_null1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:47:05.599 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem 
in "${@:-1}" 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:47:05.600 { 00:47:05.600 "params": { 00:47:05.600 "name": "Nvme$subsystem", 00:47:05.600 "trtype": "$TEST_TRANSPORT", 00:47:05.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:05.600 "adrfam": "ipv4", 00:47:05.600 "trsvcid": "$NVMF_PORT", 00:47:05.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:05.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:05.600 "hdgst": ${hdgst:-false}, 00:47:05.600 "ddgst": ${ddgst:-false} 00:47:05.600 }, 00:47:05.600 "method": "bdev_nvme_attach_controller" 00:47:05.600 } 00:47:05.600 EOF 00:47:05.600 )") 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:47:05.600 { 00:47:05.600 "params": { 00:47:05.600 "name": "Nvme$subsystem", 00:47:05.600 "trtype": "$TEST_TRANSPORT", 00:47:05.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:05.600 "adrfam": "ipv4", 00:47:05.600 "trsvcid": "$NVMF_PORT", 00:47:05.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:05.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:05.600 "hdgst": ${hdgst:-false}, 00:47:05.600 "ddgst": ${ddgst:-false} 00:47:05.600 }, 00:47:05.600 "method": "bdev_nvme_attach_controller" 00:47:05.600 } 00:47:05.600 EOF 00:47:05.600 )") 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:47:05.600 
18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:47:05.600 "params": { 00:47:05.600 "name": "Nvme0", 00:47:05.600 "trtype": "tcp", 00:47:05.600 "traddr": "10.0.0.2", 00:47:05.600 "adrfam": "ipv4", 00:47:05.600 "trsvcid": "4420", 00:47:05.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:05.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:05.600 "hdgst": false, 00:47:05.600 "ddgst": false 00:47:05.600 }, 00:47:05.600 "method": "bdev_nvme_attach_controller" 00:47:05.600 },{ 00:47:05.600 "params": { 00:47:05.600 "name": "Nvme1", 00:47:05.600 "trtype": "tcp", 00:47:05.600 "traddr": "10.0.0.2", 00:47:05.600 "adrfam": "ipv4", 00:47:05.600 "trsvcid": "4420", 00:47:05.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:05.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:47:05.600 "hdgst": false, 00:47:05.600 "ddgst": false 00:47:05.600 }, 00:47:05.600 "method": "bdev_nvme_attach_controller" 00:47:05.600 }' 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:05.600 18:14:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:05.600 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:47:05.600 ... 00:47:05.600 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:47:05.600 ... 
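The banner above is fio starting against the freshly rebuilt targets. The creation side traced earlier is the inverse of the teardown: per subsystem, a null bdev with per-block metadata and DIF, an NVMe-oF subsystem wrapping it, a namespace, and a TCP listener. Written out with rpc.py under the same assumptions as the teardown sketch above, with arguments copied from the traced rpc_cmd calls:

  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1,
  # attached as a namespace of cnode0 and exported over NVMe/TCP on 10.0.0.2:4420.
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

fio itself runs as the initiator through SPDK's fio bdev plugin, exactly as the trace shows: LD_PRELOAD points at build/fio/spdk_bdev, --ioengine=spdk_bdev selects it, the generated bdev_nvme_attach_controller config is read from --spdk_json_conf on fd 62, and the fio job file arrives on fd 61.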
00:47:05.600 fio-3.35 00:47:05.600 Starting 4 threads 00:47:10.870 00:47:10.870 filename0: (groupid=0, jobs=1): err= 0: pid=3061766: Wed Nov 20 18:14:10 2024 00:47:10.870 read: IOPS=2981, BW=23.3MiB/s (24.4MB/s)(117MiB/5003msec) 00:47:10.870 slat (nsec): min=5402, max=57416, avg=7518.59, stdev=3061.67 00:47:10.870 clat (usec): min=859, max=4852, avg=2663.81, stdev=238.62 00:47:10.870 lat (usec): min=875, max=4858, avg=2671.32, stdev=238.38 00:47:10.870 clat percentiles (usec): 00:47:10.870 | 1.00th=[ 1942], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2606], 00:47:10.870 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:47:10.870 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2835], 95.00th=[ 2933], 00:47:10.870 | 99.00th=[ 3556], 99.50th=[ 3851], 99.90th=[ 4424], 99.95th=[ 4686], 00:47:10.870 | 99.99th=[ 4817] 00:47:10.870 bw ( KiB/s): min=23680, max=24224, per=25.15%, avg=23857.60, stdev=143.20, samples=10 00:47:10.870 iops : min= 2960, max= 3028, avg=2982.20, stdev=17.90, samples=10 00:47:10.870 lat (usec) : 1000=0.10% 00:47:10.870 lat (msec) : 2=1.06%, 4=98.57%, 10=0.27% 00:47:10.870 cpu : usr=95.82%, sys=3.92%, ctx=7, majf=0, minf=39 00:47:10.870 IO depths : 1=0.1%, 2=0.1%, 4=69.7%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:10.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.870 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.870 issued rwts: total=14916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:10.870 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:10.870 filename0: (groupid=0, jobs=1): err= 0: pid=3061767: Wed Nov 20 18:14:10 2024 00:47:10.870 read: IOPS=2961, BW=23.1MiB/s (24.3MB/s)(116MiB/5002msec) 00:47:10.870 slat (nsec): min=5398, max=55367, avg=7450.43, stdev=2753.99 00:47:10.870 clat (usec): min=1150, max=4967, avg=2680.99, stdev=249.18 00:47:10.870 lat (usec): min=1158, max=4975, avg=2688.44, stdev=249.27 00:47:10.870 clat percentiles (usec): 00:47:10.870 | 1.00th=[ 2008], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2606], 00:47:10.871 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:47:10.871 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2868], 95.00th=[ 3032], 00:47:10.871 | 99.00th=[ 3818], 99.50th=[ 3982], 99.90th=[ 4293], 99.95th=[ 4490], 00:47:10.871 | 99.99th=[ 4948] 00:47:10.871 bw ( KiB/s): min=23326, max=23920, per=24.99%, avg=23697.56, stdev=195.03, samples=9 00:47:10.871 iops : min= 2915, max= 2990, avg=2962.11, stdev=24.56, samples=9 00:47:10.871 lat (msec) : 2=0.89%, 4=98.62%, 10=0.49% 00:47:10.871 cpu : usr=96.12%, sys=3.60%, ctx=8, majf=0, minf=35 00:47:10.871 IO depths : 1=0.1%, 2=0.2%, 4=71.5%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:10.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.871 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.871 issued rwts: total=14815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:10.871 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:10.871 filename1: (groupid=0, jobs=1): err= 0: pid=3061768: Wed Nov 20 18:14:10 2024 00:47:10.871 read: IOPS=2972, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:47:10.871 slat (nsec): min=5403, max=67423, avg=7561.46, stdev=3181.64 00:47:10.871 clat (usec): min=1122, max=5015, avg=2671.51, stdev=231.98 00:47:10.871 lat (usec): min=1130, max=5024, avg=2679.07, stdev=232.12 00:47:10.871 clat percentiles (usec): 00:47:10.871 | 1.00th=[ 1991], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2606], 
00:47:10.871 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:47:10.871 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2868], 95.00th=[ 2966], 00:47:10.871 | 99.00th=[ 3556], 99.50th=[ 3818], 99.90th=[ 4359], 99.95th=[ 4490], 00:47:10.871 | 99.99th=[ 5014] 00:47:10.871 bw ( KiB/s): min=23342, max=23888, per=25.06%, avg=23765.11, stdev=177.32, samples=9 00:47:10.871 iops : min= 2917, max= 2986, avg=2970.56, stdev=22.39, samples=9 00:47:10.871 lat (msec) : 2=1.11%, 4=98.63%, 10=0.26% 00:47:10.871 cpu : usr=96.08%, sys=3.64%, ctx=7, majf=0, minf=83 00:47:10.871 IO depths : 1=0.1%, 2=0.2%, 4=71.1%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:10.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.871 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.871 issued rwts: total=14865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:10.871 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:10.871 filename1: (groupid=0, jobs=1): err= 0: pid=3061769: Wed Nov 20 18:14:10 2024 00:47:10.871 read: IOPS=2942, BW=23.0MiB/s (24.1MB/s)(115MiB/5002msec) 00:47:10.871 slat (nsec): min=5407, max=59244, avg=7367.06, stdev=2819.92 00:47:10.871 clat (usec): min=1175, max=4954, avg=2699.23, stdev=289.14 00:47:10.871 lat (usec): min=1181, max=4983, avg=2706.59, stdev=289.13 00:47:10.871 clat percentiles (usec): 00:47:10.871 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2606], 00:47:10.871 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:47:10.871 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 3195], 00:47:10.871 | 99.00th=[ 3982], 99.50th=[ 4047], 99.90th=[ 4555], 99.95th=[ 4883], 00:47:10.871 | 99.99th=[ 4948] 00:47:10.871 bw ( KiB/s): min=23392, max=23760, per=24.83%, avg=23553.78, stdev=142.54, samples=9 00:47:10.871 iops : min= 2924, max= 2970, avg=2944.22, stdev=17.82, samples=9 00:47:10.871 lat (msec) : 2=0.84%, 4=98.45%, 10=0.71% 00:47:10.871 cpu : usr=96.08%, sys=3.66%, ctx=7, majf=0, minf=36 00:47:10.871 IO depths : 1=0.1%, 2=0.5%, 4=72.1%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:10.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.871 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.871 issued rwts: total=14716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:10.871 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:10.871 00:47:10.871 Run status group 0 (all jobs): 00:47:10.871 READ: bw=92.6MiB/s (97.1MB/s), 23.0MiB/s-23.3MiB/s (24.1MB/s-24.4MB/s), io=463MiB (486MB), run=5001-5003msec 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:10.871 18:14:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:10.871 00:47:10.871 real 0m24.293s 00:47:10.871 user 5m17.794s 00:47:10.871 sys 0m4.524s 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 ************************************ 00:47:10.871 END TEST fio_dif_rand_params 00:47:10.871 ************************************ 00:47:10.871 18:14:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:47:10.871 18:14:10 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:10.871 18:14:10 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 ************************************ 00:47:10.871 START TEST fio_dif_digest 00:47:10.871 ************************************ 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 bdev_null0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:10.871 [2024-11-20 18:14:10.491556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:47:10.871 18:14:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:47:10.871 { 00:47:10.871 "params": { 00:47:10.872 "name": "Nvme$subsystem", 00:47:10.872 "trtype": "$TEST_TRANSPORT", 00:47:10.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:10.872 "adrfam": "ipv4", 00:47:10.872 "trsvcid": "$NVMF_PORT", 00:47:10.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:10.872 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:47:10.872 "hdgst": ${hdgst:-false}, 00:47:10.872 "ddgst": ${ddgst:-false} 00:47:10.872 }, 00:47:10.872 "method": "bdev_nvme_attach_controller" 00:47:10.872 } 00:47:10.872 EOF 00:47:10.872 )") 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:47:10.872 "params": { 00:47:10.872 "name": "Nvme0", 00:47:10.872 "trtype": "tcp", 00:47:10.872 "traddr": "10.0.0.2", 00:47:10.872 "adrfam": "ipv4", 00:47:10.872 "trsvcid": "4420", 00:47:10.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:10.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:10.872 "hdgst": true, 00:47:10.872 "ddgst": true 00:47:10.872 }, 00:47:10.872 "method": "bdev_nvme_attach_controller" 00:47:10.872 }' 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:10.872 18:14:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:11.132 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:47:11.132 ... 
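The JSON printed above is the entire bdev layer for this run: a single bdev_nvme_attach_controller with both header digest (hdgst) and data digest (ddgst) set to true, so every TCP PDU on the connection is checksummed. The job file itself arrives on /dev/fd/61; a hand-written stand-in matching the banner (randread, 128 KiB blocks, queue depth 3, three jobs to match the three filename0 result groups that follow) might look like the sketch below, where digest.fio, bdev.json, and the bdev name Nvme0n1 are all assumptions, not values from the log:

  # illustrative stand-in for the generated job on /dev/fd/61
  cat > digest.fio <<'EOF'
  [filename0]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  filename=Nvme0n1
  EOF
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio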
00:47:11.132 fio-3.35 00:47:11.132 Starting 3 threads 00:47:23.352 00:47:23.352 filename0: (groupid=0, jobs=1): err= 0: pid=3063245: Wed Nov 20 18:14:21 2024 00:47:23.352 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(387MiB/10046msec) 00:47:23.352 slat (nsec): min=5653, max=36596, avg=7286.72, stdev=1539.32 00:47:23.352 clat (usec): min=6742, max=49924, avg=9705.44, stdev=1281.77 00:47:23.352 lat (usec): min=6749, max=49931, avg=9712.73, stdev=1281.77 00:47:23.352 clat percentiles (usec): 00:47:23.352 | 1.00th=[ 7767], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:47:23.352 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:47:23.352 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:47:23.352 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12518], 99.95th=[46924], 00:47:23.352 | 99.99th=[50070] 00:47:23.352 bw ( KiB/s): min=38656, max=40704, per=34.14%, avg=39635.75, stdev=522.20, samples=20 00:47:23.352 iops : min= 302, max= 318, avg=309.50, stdev= 4.10, samples=20 00:47:23.352 lat (msec) : 10=66.11%, 20=33.83%, 50=0.06% 00:47:23.352 cpu : usr=93.57%, sys=6.18%, ctx=18, majf=0, minf=94 00:47:23.352 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:23.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:23.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:23.352 issued rwts: total=3098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:23.352 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:23.352 filename0: (groupid=0, jobs=1): err= 0: pid=3063246: Wed Nov 20 18:14:21 2024 00:47:23.352 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(378MiB/10046msec) 00:47:23.352 slat (nsec): min=5629, max=36461, avg=6892.96, stdev=1198.36 00:47:23.352 clat (usec): min=6757, max=50848, avg=9940.59, stdev=1352.38 00:47:23.352 lat (usec): min=6764, max=50855, avg=9947.48, stdev=1352.40 00:47:23.352 clat percentiles (usec): 00:47:23.352 | 1.00th=[ 7898], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:47:23.352 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:47:23.352 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:47:23.352 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12911], 99.95th=[49546], 00:47:23.352 | 99.99th=[50594] 00:47:23.352 bw ( KiB/s): min=37632, max=39680, per=33.33%, avg=38694.40, stdev=617.51, samples=20 00:47:23.352 iops : min= 294, max= 310, avg=302.30, stdev= 4.82, samples=20 00:47:23.353 lat (msec) : 10=55.24%, 20=44.69%, 50=0.03%, 100=0.03% 00:47:23.353 cpu : usr=93.83%, sys=5.92%, ctx=20, majf=0, minf=112 00:47:23.353 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:23.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:23.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:23.353 issued rwts: total=3025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:23.353 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:23.353 filename0: (groupid=0, jobs=1): err= 0: pid=3063247: Wed Nov 20 18:14:21 2024 00:47:23.353 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(374MiB/10043msec) 00:47:23.353 slat (nsec): min=5808, max=68767, avg=7225.65, stdev=1810.67 00:47:23.353 clat (usec): min=6868, max=46389, avg=10046.82, stdev=1098.12 00:47:23.353 lat (usec): min=6874, max=46396, avg=10054.05, stdev=1098.13 00:47:23.353 clat percentiles (usec): 00:47:23.353 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 
00:47:23.353 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:47:23.353 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11469], 00:47:23.353 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13173], 99.95th=[13304], 00:47:23.353 | 99.99th=[46400] 00:47:23.353 bw ( KiB/s): min=37120, max=40704, per=32.93%, avg=38233.60, stdev=962.44, samples=20 00:47:23.353 iops : min= 290, max= 318, avg=298.70, stdev= 7.52, samples=20 00:47:23.353 lat (msec) : 10=48.46%, 20=51.51%, 50=0.03% 00:47:23.353 cpu : usr=93.74%, sys=6.01%, ctx=21, majf=0, minf=237 00:47:23.353 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:23.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:23.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:23.353 issued rwts: total=2988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:23.353 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:23.353 00:47:23.353 Run status group 0 (all jobs): 00:47:23.353 READ: bw=113MiB/s (119MB/s), 37.2MiB/s-38.5MiB/s (39.0MB/s-40.4MB/s), io=1139MiB (1194MB), run=10043-10046msec 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:23.353 00:47:23.353 real 0m11.208s 00:47:23.353 user 0m44.746s 00:47:23.353 sys 0m2.130s 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:23.353 18:14:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:23.353 ************************************ 00:47:23.353 END TEST fio_dif_digest 00:47:23.353 ************************************ 00:47:23.353 18:14:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:47:23.353 18:14:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:23.353 rmmod nvme_tcp 00:47:23.353 rmmod nvme_fabrics 00:47:23.353 rmmod nvme_keyring 00:47:23.353 18:14:21 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 3053223 ']' 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 3053223 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3053223 ']' 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3053223 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3053223 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3053223' 00:47:23.353 killing process with pid 3053223 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3053223 00:47:23.353 18:14:21 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3053223 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:47:23.353 18:14:21 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:25.892 Waiting for block devices as requested 00:47:25.892 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:25.892 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:25.892 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:25.892 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:25.892 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:25.892 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:26.152 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:26.152 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:26.152 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:26.412 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:26.412 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:26.671 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:26.671 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:26.671 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:26.671 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:26.930 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:26.930 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:27.190 18:14:27 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:27.190 18:14:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:27.190 18:14:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:29.730 18:14:29 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:29.730 
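That completes nvmftestfini: the target process was killed, the initiator modules unloaded (the rmmod lines above are modprobe -v -r removing nvme-tcp and its dependents), the SPDK iptables rule dropped, and the target namespace and leftover address flushed. Stripped of the wrappers, the teardown is roughly the following, where the netns deletion is a sketch of what _remove_spdk_ns presumably does rather than a command from the log:

  # condensed teardown (interface and netns names as in this run)
  modprobe -v -r nvme-tcp                                # also unloads nvme-fabrics, nvme-keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rule
  ip netns delete cvl_0_0_ns_spdk                        # sketch of _remove_spdk_ns
  ip -4 addr flush cvl_0_1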
00:47:29.730 real 1m18.108s 00:47:29.730 user 8m8.617s 00:47:29.730 sys 0m21.909s 00:47:29.730 18:14:29 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:29.730 18:14:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:29.730 ************************************ 00:47:29.730 END TEST nvmf_dif 00:47:29.730 ************************************ 00:47:29.730 18:14:29 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:29.730 18:14:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:29.730 18:14:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:29.730 18:14:29 -- common/autotest_common.sh@10 -- # set +x 00:47:29.730 ************************************ 00:47:29.730 START TEST nvmf_abort_qd_sizes 00:47:29.730 ************************************ 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:29.730 * Looking for test storage... 00:47:29.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.730 --rc genhtml_branch_coverage=1 00:47:29.730 --rc genhtml_function_coverage=1 00:47:29.730 --rc genhtml_legend=1 00:47:29.730 --rc geninfo_all_blocks=1 00:47:29.730 --rc geninfo_unexecuted_blocks=1 00:47:29.730 00:47:29.730 ' 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.730 --rc genhtml_branch_coverage=1 00:47:29.730 --rc genhtml_function_coverage=1 00:47:29.730 --rc genhtml_legend=1 00:47:29.730 --rc geninfo_all_blocks=1 00:47:29.730 --rc geninfo_unexecuted_blocks=1 00:47:29.730 00:47:29.730 ' 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.730 --rc genhtml_branch_coverage=1 00:47:29.730 --rc genhtml_function_coverage=1 00:47:29.730 --rc genhtml_legend=1 00:47:29.730 --rc geninfo_all_blocks=1 00:47:29.730 --rc geninfo_unexecuted_blocks=1 00:47:29.730 00:47:29.730 ' 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:29.730 --rc genhtml_branch_coverage=1 00:47:29.730 --rc genhtml_function_coverage=1 00:47:29.730 --rc genhtml_legend=1 00:47:29.730 --rc geninfo_all_blocks=1 00:47:29.730 --rc geninfo_unexecuted_blocks=1 00:47:29.730 00:47:29.730 ' 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.730 18:14:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:29.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:47:29.731 18:14:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:47:37.866 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:47:37.866 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- 
# [[ tcp == rdma ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:47:37.866 Found net devices under 0000:4b:00.0: cvl_0_0 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:47:37.866 Found net devices under 0000:4b:00.1: cvl_0_1 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:47:37.866 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:37.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:37.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:47:37.867 00:47:37.867 --- 10.0.0.2 ping statistics --- 00:47:37.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:37.867 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:37.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:37.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:47:37.867 00:47:37.867 --- 10.0.0.1 ping statistics --- 00:47:37.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:37.867 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:47:37.867 18:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:40.409 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:40.409 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:40.669 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:40.669 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:40.669 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=3072442 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 3072442 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3072442 ']' 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:47:40.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:40.931 18:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:40.931 [2024-11-20 18:14:40.796728] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:47:40.931 [2024-11-20 18:14:40.796792] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:41.197 [2024-11-20 18:14:40.886563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:41.197 [2024-11-20 18:14:40.936267] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:41.197 [2024-11-20 18:14:40.936322] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:41.197 [2024-11-20 18:14:40.936330] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:41.197 [2024-11-20 18:14:40.936337] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:41.197 [2024-11-20 18:14:40.936343] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:41.197 [2024-11-20 18:14:40.936464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:41.197 [2024-11-20 18:14:40.936620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:47:41.197 [2024-11-20 18:14:40.936754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:41.197 [2024-11-20 18:14:40.936756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:47:41.846 
18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:41.846 18:14:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:41.846 ************************************ 00:47:41.846 START TEST spdk_target_abort 00:47:41.846 ************************************ 00:47:41.846 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:47:41.846 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:47:41.846 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:47:41.846 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:41.846 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.112 spdk_targetn1 00:47:42.112 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.112 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:42.112 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.112 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.112 [2024-11-20 18:14:41.983082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:42.112 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.112 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:47:42.112 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.112 18:14:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.112 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.112 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:47:42.112 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.112 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.112 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.112 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:47:42.112 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.112 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.112 [2024-11-20 18:14:42.023378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:42.373 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:42.374 18:14:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:42.634 [2024-11-20 18:14:42.326619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:512 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.326651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0043 p:1 m:0 dnr:0 00:47:42.634 [2024-11-20 18:14:42.342746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1032 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.342767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0082 p:1 m:0 dnr:0 00:47:42.634 [2024-11-20 18:14:42.359351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1592 len:8 PRP1 0x2000078be000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.359376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c8 p:1 m:0 dnr:0 00:47:42.634 [2024-11-20 18:14:42.359621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1608 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.359634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00ca p:1 m:0 dnr:0 00:47:42.634 [2024-11-20 18:14:42.366687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1784 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.366705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00e1 p:1 m:0 dnr:0 00:47:42.634 [2024-11-20 18:14:42.382650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2248 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.382670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:47:42.634 [2024-11-20 18:14:42.384224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2352 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.384241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:47:42.634 [2024-11-20 18:14:42.406710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3000 len:8 PRP1 0x2000078be000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.406729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:47:42.634 [2024-11-20 18:14:42.415038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3296 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:47:42.634 [2024-11-20 18:14:42.415056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00a0 p:0 m:0 dnr:0 00:47:45.929 Initializing NVMe Controllers 00:47:45.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:45.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:45.929 Initialization complete. Launching workers. 
00:47:45.929 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11760, failed: 9 00:47:45.929 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2572, failed to submit 9197 00:47:45.929 success 726, unsuccessful 1846, failed 0 00:47:45.929 18:14:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:45.929 18:14:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:45.930 [2024-11-20 18:14:45.501424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:1008 len:8 PRP1 0x200007c56000 PRP2 0x0 00:47:45.930 [2024-11-20 18:14:45.501457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:47:45.930 [2024-11-20 18:14:45.577296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:2616 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:47:45.930 [2024-11-20 18:14:45.577323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:47:45.930 [2024-11-20 18:14:45.617293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:3360 len:8 PRP1 0x200007c52000 PRP2 0x0 00:47:45.930 [2024-11-20 18:14:45.617316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00b2 p:0 m:0 dnr:0 00:47:46.191 [2024-11-20 18:14:45.952566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:11184 len:8 PRP1 0x200007c54000 PRP2 0x0 00:47:46.191 [2024-11-20 18:14:45.952600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:47:48.133 [2024-11-20 18:14:47.604989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:48440 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:47:48.133 [2024-11-20 18:14:47.605014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:00ab p:0 m:0 dnr:0 00:47:48.705 Initializing NVMe Controllers 00:47:48.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:48.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:48.705 Initialization complete. Launching workers. 
00:47:48.705 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8432, failed: 5 00:47:48.705 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 7222 00:47:48.705 success 356, unsuccessful 859, failed 0 00:47:48.705 18:14:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:48.705 18:14:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:49.276 [2024-11-20 18:14:48.907446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:157 nsid:1 lba:3768 len:8 PRP1 0x200007922000 PRP2 0x0 00:47:49.276 [2024-11-20 18:14:48.907471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:157 cdw0:0 sqhd:00d8 p:0 m:0 dnr:0 00:47:49.846 [2024-11-20 18:14:49.546882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:183 nsid:1 lba:78512 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:47:49.846 [2024-11-20 18:14:49.546904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:183 cdw0:0 sqhd:005c p:1 m:0 dnr:0 00:47:52.389 Initializing NVMe Controllers 00:47:52.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:52.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:52.390 Initialization complete. Launching workers. 00:47:52.390 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44064, failed: 2 00:47:52.390 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2706, failed to submit 41360 00:47:52.390 success 593, unsuccessful 2113, failed 0 00:47:52.390 18:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:47:52.390 18:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:52.390 18:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:52.390 18:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:52.390 18:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:47:52.390 18:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:52.390 18:14:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3072442 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3072442 ']' 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3072442 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072442 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072442' 00:47:54.301 killing process with pid 3072442 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3072442 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3072442 00:47:54.301 00:47:54.301 real 0m12.275s 00:47:54.301 user 0m49.951s 00:47:54.301 sys 0m1.948s 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:54.301 ************************************ 00:47:54.301 END TEST spdk_target_abort 00:47:54.301 ************************************ 00:47:54.301 18:14:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:47:54.301 18:14:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:54.301 18:14:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:54.301 18:14:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:54.301 ************************************ 00:47:54.301 START TEST kernel_target_abort 00:47:54.301 ************************************ 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:47:54.301 18:14:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:47:54.301 18:14:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:47:54.301 18:14:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:57.599 Waiting for block devices as requested 00:47:57.599 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:57.860 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:57.860 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:57.860 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:57.860 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:58.121 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:58.121 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:58.121 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:58.381 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:58.381 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:58.641 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:58.641 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:58.641 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:58.901 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:58.901 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:58.901 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:59.162 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:59.426 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:47:59.426 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:47:59.426 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:47:59.426 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:47:59.426 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:47:59.426 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:47:59.426 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:47:59.426 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:47:59.427 No valid GPT data, bailing 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:47:59.427 18:14:59 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:47:59.427 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:47:59.695 00:47:59.695 Discovery Log Number of Records 2, Generation counter 2 00:47:59.695 =====Discovery Log Entry 0====== 00:47:59.695 trtype: tcp 00:47:59.695 adrfam: ipv4 00:47:59.695 subtype: current discovery subsystem 00:47:59.695 treq: not specified, sq flow control disable supported 00:47:59.695 portid: 1 00:47:59.695 trsvcid: 4420 00:47:59.695 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:47:59.695 traddr: 10.0.0.1 00:47:59.695 eflags: none 00:47:59.695 sectype: none 00:47:59.695 =====Discovery Log Entry 1====== 00:47:59.695 trtype: tcp 00:47:59.695 adrfam: ipv4 00:47:59.695 subtype: nvme subsystem 00:47:59.695 treq: not specified, sq flow control disable supported 00:47:59.695 portid: 1 00:47:59.695 trsvcid: 4420 00:47:59.695 subnqn: nqn.2016-06.io.spdk:testnqn 00:47:59.695 traddr: 10.0.0.1 00:47:59.695 eflags: none 00:47:59.695 sectype: none 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:59.695 
18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:59.695 18:14:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:02.994 Initializing NVMe Controllers 00:48:02.994 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:02.994 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:02.994 Initialization complete. Launching workers. 
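The configure_kernel_target sequence traced at 18:14:59 is plain configfs manipulation. A stand-alone sketch follows; note that xtrace records the echo commands but not their redirections, so the attribute file names below follow the kernel's standard nvmet configfs layout and are inferred rather than read from the trace:

    # Recreate the kernel NVMe-oF TCP target configured above.
    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string (assumed destination)
    echo 1 > "$subsys/attr_allow_any_host"                         # the first bare 'echo 1' (assumed destination)
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # the block device found by the GPT scan
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"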
00:48:02.994 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67428, failed: 0 00:48:02.994 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67428, failed to submit 0 00:48:02.994 success 0, unsuccessful 67428, failed 0 00:48:02.994 18:15:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:48:02.994 18:15:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:06.293 Initializing NVMe Controllers 00:48:06.293 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:06.293 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:06.293 Initialization complete. Launching workers. 00:48:06.293 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 116621, failed: 0 00:48:06.293 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29362, failed to submit 87259 00:48:06.293 success 0, unsuccessful 29362, failed 0 00:48:06.293 18:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:48:06.293 18:15:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:48:08.838 Initializing NVMe Controllers 00:48:08.838 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:48:08.838 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:48:08.838 Initialization complete. Launching workers. 
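The listener was verified at 18:14:59 with the nvme discover call traced earlier; as a stand-alone check it is simply the command below, with the two discovery log records shown above (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn) as the expected output:

    nvme discover -t tcp -a 10.0.0.1 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be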
00:48:08.838 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146298, failed: 0 00:48:08.838 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36622, failed to submit 109676 00:48:08.838 success 0, unsuccessful 36622, failed 0 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:48:08.838 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:48:09.099 18:15:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:48:12.400 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:48:12.400 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:48:12.400 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:48:12.400 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:48:12.400 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:48:12.661 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:48:14.573 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:48:14.833 00:48:14.833 real 0m20.541s 00:48:14.833 user 0m9.806s 00:48:14.833 sys 0m6.364s 00:48:14.833 18:15:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:14.833 18:15:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:48:14.833 ************************************ 00:48:14.833 END TEST kernel_target_abort 00:48:14.833 ************************************ 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:14.833 rmmod nvme_tcp 00:48:14.833 rmmod nvme_fabrics 00:48:14.833 rmmod nvme_keyring 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 3072442 ']' 00:48:14.833 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 3072442 00:48:14.834 18:15:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3072442 ']' 00:48:14.834 18:15:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3072442 00:48:14.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3072442) - No such process 00:48:14.834 18:15:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3072442 is not found' 00:48:14.834 Process with pid 3072442 is not found 00:48:14.834 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:48:14.834 18:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:48:18.132 Waiting for block devices as requested 00:48:18.132 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:48:18.392 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:48:18.392 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:48:18.392 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:48:18.652 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:48:18.652 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:48:18.652 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:48:18.912 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:48:18.912 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:48:19.172 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:48:19.172 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:48:19.172 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:48:19.172 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:48:19.438 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:48:19.438 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:48:19.438 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:48:19.698 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
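The teardown traced between 18:15:08 and 18:15:14 undoes the earlier setup in reverse: clean_kernel_target unwinds the configfs tree, then nvmftestfini drops the host-side fabrics modules (the rmmod lines above). As a sketch, with the destination of the bare 'echo 0' again inferred from the nvmet configfs layout:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"    # quiesce the namespace (assumed destination)
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet       # kernel target modules
    modprobe -v -r nvme-tcp           # host side; pulls out nvme_fabrics/nvme_keyring too, per the rmmod lines
    modprobe -v -r nvme-fabrics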
00:48:19.958 18:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:21.869 18:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:22.130 00:48:22.131 real 0m52.602s 00:48:22.131 user 1m5.245s 00:48:22.131 sys 0m19.227s 00:48:22.131 18:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:22.131 18:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:48:22.131 ************************************ 00:48:22.131 END TEST nvmf_abort_qd_sizes 00:48:22.131 ************************************ 00:48:22.131 18:15:21 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:48:22.131 18:15:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:22.131 18:15:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:22.131 18:15:21 -- common/autotest_common.sh@10 -- # set +x 00:48:22.131 ************************************ 00:48:22.131 START TEST keyring_file 00:48:22.131 ************************************ 00:48:22.131 18:15:21 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:48:22.131 * Looking for test storage... 00:48:22.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:48:22.131 18:15:21 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:48:22.131 18:15:21 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:48:22.131 18:15:21 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:48:22.131 18:15:22 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:22.131 18:15:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:48:22.131 18:15:22 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:22.131 18:15:22 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:48:22.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:22.131 --rc genhtml_branch_coverage=1 00:48:22.131 --rc genhtml_function_coverage=1 00:48:22.131 --rc genhtml_legend=1 00:48:22.131 --rc geninfo_all_blocks=1 00:48:22.131 --rc geninfo_unexecuted_blocks=1 00:48:22.131 00:48:22.131 ' 00:48:22.131 18:15:22 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:48:22.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:22.131 --rc genhtml_branch_coverage=1 00:48:22.131 --rc genhtml_function_coverage=1 00:48:22.131 --rc genhtml_legend=1 00:48:22.131 --rc geninfo_all_blocks=1 00:48:22.131 --rc geninfo_unexecuted_blocks=1 00:48:22.131 00:48:22.131 ' 00:48:22.131 18:15:22 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:48:22.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:22.131 --rc genhtml_branch_coverage=1 00:48:22.131 --rc genhtml_function_coverage=1 00:48:22.131 --rc genhtml_legend=1 00:48:22.131 --rc geninfo_all_blocks=1 00:48:22.131 --rc geninfo_unexecuted_blocks=1 00:48:22.131 00:48:22.131 ' 00:48:22.131 18:15:22 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:48:22.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:22.131 --rc genhtml_branch_coverage=1 00:48:22.131 --rc genhtml_function_coverage=1 00:48:22.131 --rc genhtml_legend=1 00:48:22.131 --rc geninfo_all_blocks=1 00:48:22.131 --rc geninfo_unexecuted_blocks=1 00:48:22.131 00:48:22.131 ' 00:48:22.131 18:15:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:48:22.131 18:15:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:22.131 
18:15:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:22.131 18:15:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:22.392 18:15:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:48:22.392 18:15:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:22.392 18:15:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:22.392 18:15:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:22.392 18:15:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:22.392 18:15:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:22.392 18:15:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:22.392 18:15:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:48:22.392 18:15:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@51 -- # : 0 
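The lcov check at the top of keyring_file runs through the lt()/cmp_versions() helpers from scripts/common.sh, which split each version on '.', '-' and ':' and compare the fields numerically. A condensed rendering that keeps only the less-than path (the real helper supports more operators):

    # Return success when $1 < $2, comparing dotted numeric fields.
    lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.x: keep the branch/function coverage LCOV_OPTS"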
00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:22.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:48:22.392 18:15:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:48:22.392 18:15:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:48:22.392 18:15:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:48:22.392 18:15:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:48:22.392 18:15:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:48:22.392 18:15:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ASsC2o0eQ8 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@729 -- # python - 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ASsC2o0eQ8 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ASsC2o0eQ8 00:48:22.392 18:15:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ASsC2o0eQ8 00:48:22.392 18:15:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.sMQEtXUGPF 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:48:22.392 18:15:22 keyring_file -- nvmf/common.sh@729 -- # python - 00:48:22.392 18:15:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sMQEtXUGPF 00:48:22.393 18:15:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sMQEtXUGPF 00:48:22.393 18:15:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sMQEtXUGPF 00:48:22.393 18:15:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=3083183 00:48:22.393 18:15:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3083183 00:48:22.393 18:15:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:48:22.393 18:15:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3083183 ']' 00:48:22.393 18:15:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:22.393 18:15:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:48:22.393 18:15:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:22.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:22.393 18:15:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:48:22.393 18:15:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:22.393 [2024-11-20 18:15:22.231840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
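prep_key, traced at 18:15:22 for both key0 and key1, amounts to: mktemp, write the interchange-formatted PSK, chmod 0600, register the path. The interchange string itself ('NVMeTLSkey-1:...') is produced by format_interchange_psk via the inline python above; the sketch below does not re-derive it and uses an opaque placeholder instead:

    # prep_key in miniature: $psk_line stands in for the real interchange
    # PSK string emitted by format_interchange_psk (not reproduced here).
    key0path=$(mktemp)
    printf '%s\n' "$psk_line" > "$key0path"
    chmod 0600 "$key0path"    # keyring_file rejects looser modes; see the 0660 negative test later
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"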
00:48:22.393 [2024-11-20 18:15:22.231900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083183 ] 00:48:22.653 [2024-11-20 18:15:22.308890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:22.653 [2024-11-20 18:15:22.341041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:48:23.224 18:15:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:23.224 [2024-11-20 18:15:23.022033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:23.224 null0 00:48:23.224 [2024-11-20 18:15:23.054082] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:23.224 [2024-11-20 18:15:23.054499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.224 18:15:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.224 18:15:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:23.224 [2024-11-20 18:15:23.086150] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:48:23.224 request: 00:48:23.224 { 00:48:23.224 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:48:23.224 "secure_channel": false, 00:48:23.224 "listen_address": { 00:48:23.224 "trtype": "tcp", 00:48:23.224 "traddr": "127.0.0.1", 00:48:23.224 "trsvcid": "4420" 00:48:23.224 }, 00:48:23.224 "method": "nvmf_subsystem_add_listener", 00:48:23.224 "req_id": 1 00:48:23.224 } 00:48:23.225 Got JSON-RPC error response 00:48:23.225 response: 00:48:23.225 { 00:48:23.225 "code": -32602, 00:48:23.225 "message": "Invalid parameters" 00:48:23.225 } 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:23.225 18:15:23 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:23.225 18:15:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=3083258 00:48:23.225 18:15:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3083258 /var/tmp/bperf.sock 00:48:23.225 18:15:23 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3083258 ']' 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:23.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:48:23.225 18:15:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:23.485 [2024-11-20 18:15:23.144675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:48:23.485 [2024-11-20 18:15:23.144741] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083258 ] 00:48:23.485 [2024-11-20 18:15:23.222704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:23.485 [2024-11-20 18:15:23.270108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:48:24.058 18:15:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:48:24.058 18:15:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:48:24.058 18:15:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ASsC2o0eQ8 00:48:24.058 18:15:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ASsC2o0eQ8 00:48:24.317 18:15:24 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sMQEtXUGPF 00:48:24.317 18:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sMQEtXUGPF 00:48:24.578 18:15:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:48:24.578 18:15:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:48:24.578 18:15:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:24.578 18:15:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:24.578 18:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:24.578 18:15:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ASsC2o0eQ8 == \/\t\m\p\/\t\m\p\.\A\S\s\C\2\o\0\e\Q\8 ]] 00:48:24.578 18:15:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:48:24.578 18:15:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:48:24.578 18:15:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:24.578 18:15:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 00:48:24.578 18:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:24.837 18:15:24 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.sMQEtXUGPF == \/\t\m\p\/\t\m\p\.\s\M\Q\E\t\X\U\G\P\F ]] 00:48:24.837 18:15:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:48:24.837 18:15:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:24.837 18:15:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:24.838 18:15:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:24.838 18:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:24.838 18:15:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:25.097 18:15:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:48:25.097 18:15:24 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:48:25.097 18:15:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:25.097 18:15:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:25.097 18:15:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:25.097 18:15:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:25.097 18:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:25.097 18:15:24 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:48:25.097 18:15:24 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:25.097 18:15:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:25.357 [2024-11-20 18:15:25.134860] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:25.357 nvme0n1 00:48:25.357 18:15:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:48:25.357 18:15:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:25.357 18:15:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:25.357 18:15:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:25.357 18:15:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:25.357 18:15:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:25.616 18:15:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:48:25.616 18:15:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:48:25.616 18:15:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:25.616 18:15:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:25.616 18:15:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:25.616 18:15:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:25.616 18:15:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:48:25.876 18:15:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:48:25.876 18:15:25 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:25.876 Running I/O for 1 seconds... 00:48:26.813 19593.00 IOPS, 76.54 MiB/s 00:48:26.813 Latency(us) 00:48:26.813 [2024-11-20T17:15:26.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:26.813 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:48:26.813 nvme0n1 : 1.00 19649.55 76.76 0.00 0.00 6503.56 3631.79 18896.21 00:48:26.813 [2024-11-20T17:15:26.729Z] =================================================================================================================== 00:48:26.813 [2024-11-20T17:15:26.729Z] Total : 19649.55 76.76 0.00 0.00 6503.56 3631.79 18896.21 00:48:26.813 { 00:48:26.813 "results": [ 00:48:26.813 { 00:48:26.813 "job": "nvme0n1", 00:48:26.814 "core_mask": "0x2", 00:48:26.814 "workload": "randrw", 00:48:26.814 "percentage": 50, 00:48:26.814 "status": "finished", 00:48:26.814 "queue_depth": 128, 00:48:26.814 "io_size": 4096, 00:48:26.814 "runtime": 1.003636, 00:48:26.814 "iops": 19649.554220852977, 00:48:26.814 "mibps": 76.75607117520694, 00:48:26.814 "io_failed": 0, 00:48:26.814 "io_timeout": 0, 00:48:26.814 "avg_latency_us": 6503.555316667512, 00:48:26.814 "min_latency_us": 3631.786666666667, 00:48:26.814 "max_latency_us": 18896.213333333333 00:48:26.814 } 00:48:26.814 ], 00:48:26.814 "core_count": 1 00:48:26.814 } 00:48:26.814 18:15:26 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:26.814 18:15:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:27.075 18:15:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:48:27.075 18:15:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:27.075 18:15:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:27.075 18:15:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:27.075 18:15:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:27.075 18:15:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:27.335 18:15:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:48:27.335 18:15:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:48:27.335 18:15:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:27.335 18:15:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:27.335 18:15:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:27.335 18:15:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:27.335 18:15:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:27.335 18:15:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:48:27.335 18:15:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:27.335 18:15:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 
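Before the wrong-key case the trace turns to next, the happy path just exercised compresses to one attach with key0, a one-second bdevperf run, and one detach. In RPC terms, with socket path and arguments verbatim from the trace (the refcnt on key0 goes 1 -> 2 while attached, back to 1 after detach):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Attach over TLS using the registered file-based key.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # (bdevperf.py perform_tests drives the I/O here, as traced above.)
    "$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
    # Key refcounts are observable at any point:
    "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")'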
00:48:27.335 18:15:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:27.335 18:15:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:27.335 18:15:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:27.335 18:15:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:27.335 18:15:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:27.335 18:15:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:27.335 18:15:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:27.595 [2024-11-20 18:15:27.386452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:27.595 [2024-11-20 18:15:27.386496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1617700 (107): Transport endpoint is not connected 00:48:27.595 [2024-11-20 18:15:27.387492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1617700 (9): Bad file descriptor 00:48:27.595 [2024-11-20 18:15:27.388494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:27.595 [2024-11-20 18:15:27.388501] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:48:27.595 [2024-11-20 18:15:27.388507] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:48:27.595 [2024-11-20 18:15:27.388513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:48:27.595 request: 00:48:27.595 { 00:48:27.595 "name": "nvme0", 00:48:27.595 "trtype": "tcp", 00:48:27.595 "traddr": "127.0.0.1", 00:48:27.595 "adrfam": "ipv4", 00:48:27.595 "trsvcid": "4420", 00:48:27.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:27.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:27.595 "prchk_reftag": false, 00:48:27.595 "prchk_guard": false, 00:48:27.595 "hdgst": false, 00:48:27.595 "ddgst": false, 00:48:27.595 "psk": "key1", 00:48:27.595 "allow_unrecognized_csi": false, 00:48:27.595 "method": "bdev_nvme_attach_controller", 00:48:27.595 "req_id": 1 00:48:27.595 } 00:48:27.595 Got JSON-RPC error response 00:48:27.595 response: 00:48:27.595 { 00:48:27.595 "code": -5, 00:48:27.595 "message": "Input/output error" 00:48:27.595 } 00:48:27.595 18:15:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:27.595 18:15:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:27.595 18:15:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:27.595 18:15:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:27.595 18:15:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:48:27.595 18:15:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:27.595 18:15:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:27.595 18:15:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:27.595 18:15:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:27.595 18:15:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:27.874 18:15:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:48:27.874 18:15:27 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:48:27.874 18:15:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:27.874 18:15:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:27.874 18:15:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:27.874 18:15:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:27.874 18:15:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:27.874 18:15:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:48:27.874 18:15:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:48:27.874 18:15:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:28.133 18:15:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:48:28.133 18:15:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:48:28.393 18:15:28 keyring_file -- keyring/file.sh@78 -- # jq length 00:48:28.393 18:15:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:48:28.393 18:15:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.393 18:15:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:48:28.393 18:15:28 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ASsC2o0eQ8 00:48:28.393 18:15:28 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ASsC2o0eQ8 00:48:28.393 18:15:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:28.393 18:15:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ASsC2o0eQ8 00:48:28.393 18:15:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:28.393 18:15:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:28.393 18:15:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:28.394 18:15:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:28.394 18:15:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ASsC2o0eQ8 00:48:28.394 18:15:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ASsC2o0eQ8 00:48:28.654 [2024-11-20 18:15:28.407160] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ASsC2o0eQ8': 0100660 00:48:28.654 [2024-11-20 18:15:28.407179] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:48:28.654 request: 00:48:28.654 { 00:48:28.654 "name": "key0", 00:48:28.654 "path": "/tmp/tmp.ASsC2o0eQ8", 00:48:28.654 "method": "keyring_file_add_key", 00:48:28.654 "req_id": 1 00:48:28.654 } 00:48:28.654 Got JSON-RPC error response 00:48:28.654 response: 00:48:28.654 { 00:48:28.654 "code": -1, 00:48:28.654 "message": "Operation not permitted" 00:48:28.654 } 00:48:28.654 18:15:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:28.654 18:15:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:28.654 18:15:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:28.654 18:15:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:28.654 18:15:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ASsC2o0eQ8 00:48:28.654 18:15:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ASsC2o0eQ8 00:48:28.654 18:15:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ASsC2o0eQ8 00:48:28.914 18:15:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ASsC2o0eQ8 00:48:28.914 18:15:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:48:28.914 18:15:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:28.914 18:15:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:28.914 18:15:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:28.914 18:15:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:28.914 18:15:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.914 18:15:28 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:48:28.914 18:15:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:28.914 18:15:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:28.914 18:15:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:28.914 18:15:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:28.914 18:15:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:28.914 18:15:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:28.914 18:15:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:28.914 18:15:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:28.914 18:15:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:29.173 [2024-11-20 18:15:28.960560] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ASsC2o0eQ8': No such file or directory 00:48:29.173 [2024-11-20 18:15:28.960573] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:48:29.173 [2024-11-20 18:15:28.960586] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:48:29.173 [2024-11-20 18:15:28.960591] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:48:29.173 [2024-11-20 18:15:28.960597] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:48:29.173 [2024-11-20 18:15:28.960602] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:48:29.173 request: 00:48:29.173 { 00:48:29.173 "name": "nvme0", 00:48:29.173 "trtype": "tcp", 00:48:29.173 "traddr": "127.0.0.1", 00:48:29.173 "adrfam": "ipv4", 00:48:29.174 "trsvcid": "4420", 00:48:29.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:29.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:29.174 "prchk_reftag": false, 00:48:29.174 "prchk_guard": false, 00:48:29.174 "hdgst": false, 00:48:29.174 "ddgst": false, 00:48:29.174 "psk": "key0", 00:48:29.174 "allow_unrecognized_csi": false, 00:48:29.174 "method": "bdev_nvme_attach_controller", 00:48:29.174 "req_id": 1 00:48:29.174 } 00:48:29.174 Got JSON-RPC error response 00:48:29.174 response: 00:48:29.174 { 00:48:29.174 "code": -19, 00:48:29.174 "message": "No such device" 00:48:29.174 } 00:48:29.174 18:15:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:29.174 18:15:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:29.174 18:15:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:29.174 18:15:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:29.174 18:15:28 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:48:29.174 18:15:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:29.434 18:15:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2tpHPFK5sv 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:29.434 18:15:29 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:29.434 18:15:29 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:48:29.434 18:15:29 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:48:29.434 18:15:29 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:48:29.434 18:15:29 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:48:29.434 18:15:29 keyring_file -- nvmf/common.sh@729 -- # python - 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2tpHPFK5sv 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2tpHPFK5sv 00:48:29.434 18:15:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.2tpHPFK5sv 00:48:29.434 18:15:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2tpHPFK5sv 00:48:29.434 18:15:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2tpHPFK5sv 00:48:29.693 18:15:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:29.693 18:15:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:29.953 nvme0n1 00:48:29.953 18:15:29 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:48:29.953 18:15:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:29.953 18:15:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:29.953 18:15:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:29.953 18:15:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:29.953 18:15:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:29.953 18:15:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:48:29.953 18:15:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:48:29.953 18:15:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:30.212 18:15:30 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:48:30.212 18:15:30 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:48:30.212 18:15:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:30.213 18:15:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:48:30.213 18:15:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:30.472 18:15:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:48:30.472 18:15:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:48:30.472 18:15:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:30.472 18:15:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:30.472 18:15:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:30.472 18:15:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:30.472 18:15:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:30.732 18:15:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:48:30.732 18:15:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:30.732 18:15:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:30.732 18:15:30 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:48:30.732 18:15:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:48:30.732 18:15:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:30.993 18:15:30 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:48:30.993 18:15:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2tpHPFK5sv 00:48:30.993 18:15:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2tpHPFK5sv 00:48:31.253 18:15:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sMQEtXUGPF 00:48:31.253 18:15:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sMQEtXUGPF 00:48:31.253 18:15:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:31.253 18:15:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:31.512 nvme0n1 00:48:31.512 18:15:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:48:31.512 18:15:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:48:31.772 18:15:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:48:31.772 "subsystems": [ 00:48:31.772 { 00:48:31.772 "subsystem": "keyring", 00:48:31.772 "config": [ 00:48:31.772 { 00:48:31.772 "method": "keyring_file_add_key", 00:48:31.772 "params": { 00:48:31.772 "name": "key0", 00:48:31.772 "path": "/tmp/tmp.2tpHPFK5sv" 00:48:31.772 } 00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "method": "keyring_file_add_key", 00:48:31.772 "params": { 00:48:31.772 "name": "key1", 00:48:31.772 "path": "/tmp/tmp.sMQEtXUGPF" 00:48:31.772 } 00:48:31.772 } 00:48:31.772 ] 
00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "subsystem": "iobuf", 00:48:31.772 "config": [ 00:48:31.772 { 00:48:31.772 "method": "iobuf_set_options", 00:48:31.772 "params": { 00:48:31.772 "small_pool_count": 8192, 00:48:31.772 "large_pool_count": 1024, 00:48:31.772 "small_bufsize": 8192, 00:48:31.772 "large_bufsize": 135168 00:48:31.772 } 00:48:31.772 } 00:48:31.772 ] 00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "subsystem": "sock", 00:48:31.772 "config": [ 00:48:31.772 { 00:48:31.772 "method": "sock_set_default_impl", 00:48:31.772 "params": { 00:48:31.772 "impl_name": "posix" 00:48:31.772 } 00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "method": "sock_impl_set_options", 00:48:31.772 "params": { 00:48:31.772 "impl_name": "ssl", 00:48:31.772 "recv_buf_size": 4096, 00:48:31.772 "send_buf_size": 4096, 00:48:31.772 "enable_recv_pipe": true, 00:48:31.772 "enable_quickack": false, 00:48:31.772 "enable_placement_id": 0, 00:48:31.772 "enable_zerocopy_send_server": true, 00:48:31.772 "enable_zerocopy_send_client": false, 00:48:31.772 "zerocopy_threshold": 0, 00:48:31.772 "tls_version": 0, 00:48:31.772 "enable_ktls": false 00:48:31.772 } 00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "method": "sock_impl_set_options", 00:48:31.772 "params": { 00:48:31.772 "impl_name": "posix", 00:48:31.772 "recv_buf_size": 2097152, 00:48:31.772 "send_buf_size": 2097152, 00:48:31.772 "enable_recv_pipe": true, 00:48:31.772 "enable_quickack": false, 00:48:31.772 "enable_placement_id": 0, 00:48:31.772 "enable_zerocopy_send_server": true, 00:48:31.772 "enable_zerocopy_send_client": false, 00:48:31.772 "zerocopy_threshold": 0, 00:48:31.772 "tls_version": 0, 00:48:31.772 "enable_ktls": false 00:48:31.772 } 00:48:31.772 } 00:48:31.772 ] 00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "subsystem": "vmd", 00:48:31.772 "config": [] 00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "subsystem": "accel", 00:48:31.772 "config": [ 00:48:31.772 { 00:48:31.772 "method": "accel_set_options", 00:48:31.772 "params": { 00:48:31.772 "small_cache_size": 128, 00:48:31.772 "large_cache_size": 16, 00:48:31.772 "task_count": 2048, 00:48:31.772 "sequence_count": 2048, 00:48:31.772 "buf_count": 2048 00:48:31.772 } 00:48:31.772 } 00:48:31.772 ] 00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "subsystem": "bdev", 00:48:31.772 "config": [ 00:48:31.772 { 00:48:31.772 "method": "bdev_set_options", 00:48:31.772 "params": { 00:48:31.772 "bdev_io_pool_size": 65535, 00:48:31.772 "bdev_io_cache_size": 256, 00:48:31.772 "bdev_auto_examine": true, 00:48:31.772 "iobuf_small_cache_size": 128, 00:48:31.772 "iobuf_large_cache_size": 16 00:48:31.772 } 00:48:31.772 }, 00:48:31.772 { 00:48:31.772 "method": "bdev_raid_set_options", 00:48:31.772 "params": { 00:48:31.772 "process_window_size_kb": 1024, 00:48:31.772 "process_max_bandwidth_mb_sec": 0 00:48:31.772 } 00:48:31.773 }, 00:48:31.773 { 00:48:31.773 "method": "bdev_iscsi_set_options", 00:48:31.773 "params": { 00:48:31.773 "timeout_sec": 30 00:48:31.773 } 00:48:31.773 }, 00:48:31.773 { 00:48:31.773 "method": "bdev_nvme_set_options", 00:48:31.773 "params": { 00:48:31.773 "action_on_timeout": "none", 00:48:31.773 "timeout_us": 0, 00:48:31.773 "timeout_admin_us": 0, 00:48:31.773 "keep_alive_timeout_ms": 10000, 00:48:31.773 "arbitration_burst": 0, 00:48:31.773 "low_priority_weight": 0, 00:48:31.773 "medium_priority_weight": 0, 00:48:31.773 "high_priority_weight": 0, 00:48:31.773 "nvme_adminq_poll_period_us": 10000, 00:48:31.773 "nvme_ioq_poll_period_us": 0, 00:48:31.773 "io_queue_requests": 512, 00:48:31.773 "delay_cmd_submit": true, 
00:48:31.773 "transport_retry_count": 4, 00:48:31.773 "bdev_retry_count": 3, 00:48:31.773 "transport_ack_timeout": 0, 00:48:31.773 "ctrlr_loss_timeout_sec": 0, 00:48:31.773 "reconnect_delay_sec": 0, 00:48:31.773 "fast_io_fail_timeout_sec": 0, 00:48:31.773 "disable_auto_failback": false, 00:48:31.773 "generate_uuids": false, 00:48:31.773 "transport_tos": 0, 00:48:31.773 "nvme_error_stat": false, 00:48:31.773 "rdma_srq_size": 0, 00:48:31.773 "io_path_stat": false, 00:48:31.773 "allow_accel_sequence": false, 00:48:31.773 "rdma_max_cq_size": 0, 00:48:31.773 "rdma_cm_event_timeout_ms": 0, 00:48:31.773 "dhchap_digests": [ 00:48:31.773 "sha256", 00:48:31.773 "sha384", 00:48:31.773 "sha512" 00:48:31.773 ], 00:48:31.773 "dhchap_dhgroups": [ 00:48:31.773 "null", 00:48:31.773 "ffdhe2048", 00:48:31.773 "ffdhe3072", 00:48:31.773 "ffdhe4096", 00:48:31.773 "ffdhe6144", 00:48:31.773 "ffdhe8192" 00:48:31.773 ] 00:48:31.773 } 00:48:31.773 }, 00:48:31.773 { 00:48:31.773 "method": "bdev_nvme_attach_controller", 00:48:31.773 "params": { 00:48:31.773 "name": "nvme0", 00:48:31.773 "trtype": "TCP", 00:48:31.773 "adrfam": "IPv4", 00:48:31.773 "traddr": "127.0.0.1", 00:48:31.773 "trsvcid": "4420", 00:48:31.773 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:31.773 "prchk_reftag": false, 00:48:31.773 "prchk_guard": false, 00:48:31.773 "ctrlr_loss_timeout_sec": 0, 00:48:31.773 "reconnect_delay_sec": 0, 00:48:31.773 "fast_io_fail_timeout_sec": 0, 00:48:31.773 "psk": "key0", 00:48:31.773 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:31.773 "hdgst": false, 00:48:31.773 "ddgst": false 00:48:31.773 } 00:48:31.773 }, 00:48:31.773 { 00:48:31.773 "method": "bdev_nvme_set_hotplug", 00:48:31.773 "params": { 00:48:31.773 "period_us": 100000, 00:48:31.773 "enable": false 00:48:31.773 } 00:48:31.773 }, 00:48:31.773 { 00:48:31.773 "method": "bdev_wait_for_examine" 00:48:31.773 } 00:48:31.773 ] 00:48:31.773 }, 00:48:31.773 { 00:48:31.773 "subsystem": "nbd", 00:48:31.773 "config": [] 00:48:31.773 } 00:48:31.773 ] 00:48:31.773 }' 00:48:31.773 18:15:31 keyring_file -- keyring/file.sh@115 -- # killprocess 3083258 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3083258 ']' 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3083258 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@955 -- # uname 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3083258 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3083258' 00:48:31.773 killing process with pid 3083258 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@969 -- # kill 3083258 00:48:31.773 Received shutdown signal, test time was about 1.000000 seconds 00:48:31.773 00:48:31.773 Latency(us) 00:48:31.773 [2024-11-20T17:15:31.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:31.773 [2024-11-20T17:15:31.689Z] =================================================================================================================== 00:48:31.773 [2024-11-20T17:15:31.689Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:31.773 18:15:31 keyring_file -- common/autotest_common.sh@974 -- # wait 3083258 
00:48:32.033 18:15:31 keyring_file -- keyring/file.sh@118 -- # bperfpid=3085038 00:48:32.033 18:15:31 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3085038 /var/tmp/bperf.sock 00:48:32.033 18:15:31 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3085038 ']' 00:48:32.033 18:15:31 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:32.033 18:15:31 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:48:32.033 18:15:31 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:48:32.033 18:15:31 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:32.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:32.033 18:15:31 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:48:32.033 18:15:31 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:48:32.033 "subsystems": [ 00:48:32.033 { 00:48:32.033 "subsystem": "keyring", 00:48:32.033 "config": [ 00:48:32.033 { 00:48:32.033 "method": "keyring_file_add_key", 00:48:32.033 "params": { 00:48:32.033 "name": "key0", 00:48:32.033 "path": "/tmp/tmp.2tpHPFK5sv" 00:48:32.033 } 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "method": "keyring_file_add_key", 00:48:32.033 "params": { 00:48:32.033 "name": "key1", 00:48:32.033 "path": "/tmp/tmp.sMQEtXUGPF" 00:48:32.033 } 00:48:32.033 } 00:48:32.033 ] 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "subsystem": "iobuf", 00:48:32.033 "config": [ 00:48:32.033 { 00:48:32.033 "method": "iobuf_set_options", 00:48:32.033 "params": { 00:48:32.033 "small_pool_count": 8192, 00:48:32.033 "large_pool_count": 1024, 00:48:32.033 "small_bufsize": 8192, 00:48:32.033 "large_bufsize": 135168 00:48:32.033 } 00:48:32.033 } 00:48:32.033 ] 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "subsystem": "sock", 00:48:32.033 "config": [ 00:48:32.033 { 00:48:32.033 "method": "sock_set_default_impl", 00:48:32.033 "params": { 00:48:32.033 "impl_name": "posix" 00:48:32.033 } 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "method": "sock_impl_set_options", 00:48:32.033 "params": { 00:48:32.033 "impl_name": "ssl", 00:48:32.033 "recv_buf_size": 4096, 00:48:32.033 "send_buf_size": 4096, 00:48:32.033 "enable_recv_pipe": true, 00:48:32.033 "enable_quickack": false, 00:48:32.033 "enable_placement_id": 0, 00:48:32.033 "enable_zerocopy_send_server": true, 00:48:32.033 "enable_zerocopy_send_client": false, 00:48:32.033 "zerocopy_threshold": 0, 00:48:32.033 "tls_version": 0, 00:48:32.033 "enable_ktls": false 00:48:32.033 } 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "method": "sock_impl_set_options", 00:48:32.033 "params": { 00:48:32.033 "impl_name": "posix", 00:48:32.033 "recv_buf_size": 2097152, 00:48:32.033 "send_buf_size": 2097152, 00:48:32.033 "enable_recv_pipe": true, 00:48:32.033 "enable_quickack": false, 00:48:32.033 "enable_placement_id": 0, 00:48:32.033 "enable_zerocopy_send_server": true, 00:48:32.033 "enable_zerocopy_send_client": false, 00:48:32.033 "zerocopy_threshold": 0, 00:48:32.033 "tls_version": 0, 00:48:32.033 "enable_ktls": false 00:48:32.033 } 00:48:32.033 } 00:48:32.033 ] 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "subsystem": "vmd", 00:48:32.033 "config": [] 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "subsystem": "accel", 00:48:32.033 "config": [ 00:48:32.033 { 
00:48:32.033 "method": "accel_set_options", 00:48:32.033 "params": { 00:48:32.033 "small_cache_size": 128, 00:48:32.033 "large_cache_size": 16, 00:48:32.033 "task_count": 2048, 00:48:32.033 "sequence_count": 2048, 00:48:32.033 "buf_count": 2048 00:48:32.033 } 00:48:32.033 } 00:48:32.033 ] 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "subsystem": "bdev", 00:48:32.033 "config": [ 00:48:32.033 { 00:48:32.033 "method": "bdev_set_options", 00:48:32.033 "params": { 00:48:32.033 "bdev_io_pool_size": 65535, 00:48:32.033 "bdev_io_cache_size": 256, 00:48:32.033 "bdev_auto_examine": true, 00:48:32.033 "iobuf_small_cache_size": 128, 00:48:32.033 "iobuf_large_cache_size": 16 00:48:32.033 } 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "method": "bdev_raid_set_options", 00:48:32.033 "params": { 00:48:32.033 "process_window_size_kb": 1024, 00:48:32.033 "process_max_bandwidth_mb_sec": 0 00:48:32.033 } 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "method": "bdev_iscsi_set_options", 00:48:32.033 "params": { 00:48:32.033 "timeout_sec": 30 00:48:32.033 } 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "method": "bdev_nvme_set_options", 00:48:32.033 "params": { 00:48:32.033 "action_on_timeout": "none", 00:48:32.033 "timeout_us": 0, 00:48:32.033 "timeout_admin_us": 0, 00:48:32.033 "keep_alive_timeout_ms": 10000, 00:48:32.033 "arbitration_burst": 0, 00:48:32.033 "low_priority_weight": 0, 00:48:32.033 "medium_priority_weight": 0, 00:48:32.033 "high_priority_weight": 0, 00:48:32.033 "nvme_adminq_poll_period_us": 10000, 00:48:32.033 "nvme_ioq_poll_period_us": 0, 00:48:32.033 "io_queue_requests": 512, 00:48:32.033 "delay_cmd_submit": true, 00:48:32.033 "transport_retry_count": 4, 00:48:32.033 "bdev_retry_count": 3, 00:48:32.033 "transport_ack_timeout": 0, 00:48:32.033 "ctrlr_loss_timeout_sec": 0, 00:48:32.033 "reconnect_delay_sec": 0, 00:48:32.033 "fast_io_fail_timeout_sec": 0, 00:48:32.033 "disable_auto_failback": false, 00:48:32.033 "generate_uuids": false, 00:48:32.033 "transport_tos": 0, 00:48:32.033 "nvme_error_stat": false, 00:48:32.033 "rdma_srq_size": 0, 00:48:32.033 "io_path_stat": false, 00:48:32.033 "allow_accel_sequence": false, 00:48:32.033 "rdma_max_cq_size": 0, 00:48:32.033 "rdma_cm_event_timeout_ms": 0, 00:48:32.033 "dhchap_digests": [ 00:48:32.033 "sha256", 00:48:32.033 "sha384", 00:48:32.033 "sha512" 00:48:32.033 ], 00:48:32.033 "dhchap_dhgroups": [ 00:48:32.033 "null", 00:48:32.033 "ffdhe2048", 00:48:32.033 "ffdhe3072", 00:48:32.033 "ffdhe4096", 00:48:32.033 "ffdhe6144", 00:48:32.033 "ffdhe8192" 00:48:32.033 ] 00:48:32.033 } 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "method": "bdev_nvme_attach_controller", 00:48:32.033 "params": { 00:48:32.033 "name": "nvme0", 00:48:32.033 "trtype": "TCP", 00:48:32.033 "adrfam": "IPv4", 00:48:32.033 "traddr": "127.0.0.1", 00:48:32.033 "trsvcid": "4420", 00:48:32.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:32.033 "prchk_reftag": false, 00:48:32.033 "prchk_guard": false, 00:48:32.033 "ctrlr_loss_timeout_sec": 0, 00:48:32.033 "reconnect_delay_sec": 0, 00:48:32.033 "fast_io_fail_timeout_sec": 0, 00:48:32.033 "psk": "key0", 00:48:32.033 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:32.033 "hdgst": false, 00:48:32.033 "ddgst": false 00:48:32.033 } 00:48:32.033 }, 00:48:32.033 { 00:48:32.033 "method": "bdev_nvme_set_hotplug", 00:48:32.033 "params": { 00:48:32.033 "period_us": 100000, 00:48:32.034 "enable": false 00:48:32.034 } 00:48:32.034 }, 00:48:32.034 { 00:48:32.034 "method": "bdev_wait_for_examine" 00:48:32.034 } 00:48:32.034 ] 00:48:32.034 }, 00:48:32.034 { 
00:48:32.034 "subsystem": "nbd", 00:48:32.034 "config": [] 00:48:32.034 } 00:48:32.034 ] 00:48:32.034 }' 00:48:32.034 18:15:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:32.034 [2024-11-20 18:15:31.784787] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:48:32.034 [2024-11-20 18:15:31.784842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085038 ] 00:48:32.034 [2024-11-20 18:15:31.861542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:32.034 [2024-11-20 18:15:31.888355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:48:32.293 [2024-11-20 18:15:32.025486] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:32.940 18:15:32 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:48:32.940 18:15:32 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:48:32.940 18:15:32 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:48:32.940 18:15:32 keyring_file -- keyring/file.sh@121 -- # jq length 00:48:32.940 18:15:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:32.940 18:15:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:48:32.940 18:15:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:48:32.940 18:15:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:32.940 18:15:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:32.940 18:15:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:32.940 18:15:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:32.940 18:15:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:33.249 18:15:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:48:33.249 18:15:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:48:33.249 18:15:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:33.249 18:15:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:33.249 18:15:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:33.249 18:15:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:33.249 18:15:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:33.249 18:15:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:48:33.249 18:15:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:48:33.249 18:15:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:48:33.249 18:15:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:48:33.520 18:15:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:48:33.520 18:15:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:48:33.520 18:15:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2tpHPFK5sv /tmp/tmp.sMQEtXUGPF 00:48:33.520 18:15:33 keyring_file -- keyring/file.sh@20 -- # 
killprocess 3085038 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3085038 ']' 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3085038 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3085038 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3085038' 00:48:33.520 killing process with pid 3085038 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@969 -- # kill 3085038 00:48:33.520 Received shutdown signal, test time was about 1.000000 seconds 00:48:33.520 00:48:33.520 Latency(us) 00:48:33.520 [2024-11-20T17:15:33.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:33.520 [2024-11-20T17:15:33.436Z] =================================================================================================================== 00:48:33.520 [2024-11-20T17:15:33.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:33.520 18:15:33 keyring_file -- common/autotest_common.sh@974 -- # wait 3085038 00:48:33.779 18:15:33 keyring_file -- keyring/file.sh@21 -- # killprocess 3083183 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3083183 ']' 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3083183 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3083183 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3083183' 00:48:33.779 killing process with pid 3083183 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@969 -- # kill 3083183 00:48:33.779 18:15:33 keyring_file -- common/autotest_common.sh@974 -- # wait 3083183 00:48:34.039 00:48:34.039 real 0m11.920s 00:48:34.039 user 0m28.786s 00:48:34.039 sys 0m2.644s 00:48:34.039 18:15:33 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:34.039 18:15:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:34.039 ************************************ 00:48:34.039 END TEST keyring_file 00:48:34.039 ************************************ 00:48:34.039 18:15:33 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:48:34.039 18:15:33 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:48:34.039 18:15:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:48:34.039 18:15:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:34.039 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:48:34.039 ************************************ 00:48:34.039 START TEST keyring_linux 
00:48:34.039 ************************************ 00:48:34.039 18:15:33 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:48:34.039 Joined session keyring: 808289001 00:48:34.039 * Looking for test storage... 00:48:34.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:48:34.039 18:15:33 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:48:34.039 18:15:33 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:48:34.039 18:15:33 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:48:34.300 18:15:33 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@345 -- # : 1 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:34.300 18:15:33 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:48:34.300 18:15:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:48:34.300 18:15:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:34.300 18:15:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:34.300 18:15:34 keyring_linux -- scripts/common.sh@368 -- # return 0 00:48:34.300 18:15:34 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:34.300 18:15:34 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:48:34.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:34.300 --rc genhtml_branch_coverage=1 00:48:34.300 --rc genhtml_function_coverage=1 00:48:34.300 --rc genhtml_legend=1 00:48:34.300 --rc geninfo_all_blocks=1 00:48:34.300 --rc geninfo_unexecuted_blocks=1 00:48:34.300 00:48:34.300 ' 00:48:34.300 18:15:34 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:48:34.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:34.300 --rc genhtml_branch_coverage=1 00:48:34.300 --rc genhtml_function_coverage=1 00:48:34.300 --rc genhtml_legend=1 00:48:34.300 --rc geninfo_all_blocks=1 00:48:34.300 --rc geninfo_unexecuted_blocks=1 00:48:34.300 00:48:34.300 ' 00:48:34.301 18:15:34 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:48:34.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:34.301 --rc genhtml_branch_coverage=1 00:48:34.301 --rc genhtml_function_coverage=1 00:48:34.301 --rc genhtml_legend=1 00:48:34.301 --rc geninfo_all_blocks=1 00:48:34.301 --rc geninfo_unexecuted_blocks=1 00:48:34.301 00:48:34.301 ' 00:48:34.301 18:15:34 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:48:34.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:34.301 --rc genhtml_branch_coverage=1 00:48:34.301 --rc genhtml_function_coverage=1 00:48:34.301 --rc genhtml_legend=1 00:48:34.301 --rc geninfo_all_blocks=1 00:48:34.301 --rc geninfo_unexecuted_blocks=1 00:48:34.301 00:48:34.301 ' 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:34.301 18:15:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:48:34.301 18:15:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:34.301 18:15:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:34.301 18:15:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:34.301 18:15:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:34.301 18:15:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:34.301 18:15:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:34.301 18:15:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:48:34.301 18:15:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
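Both test suites mint their PSKs with format_interchange_psk, whose inline python - step wraps the raw key material in the NVMe TLS interchange format. A sketch of that transformation, assuming the standard layout (key bytes followed by a little-endian CRC32, base64-encoded, digest field 00 for a plain PSK); the expected output shown is the exact string the keyctl step below stores:

    format_interchange_psk() {   # sketch of the format_key helper behind the 'python -' trace
        local key=$1 digest=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()))' "$key" "$digest"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 0
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The key string is encoded as given (the base64 payload decodes back to the ASCII hex digits plus four trailing CRC bytes), which is why the same 32-character input always yields the same interchange string across runs.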
00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:34.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@729 -- # python - 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:48:34.301 /tmp/:spdk-test:key0 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:48:34.301 
18:15:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:48:34.301 18:15:34 keyring_linux -- nvmf/common.sh@729 -- # python - 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:48:34.301 18:15:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:48:34.301 /tmp/:spdk-test:key1 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3085479 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3085479 00:48:34.301 18:15:34 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:48:34.301 18:15:34 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3085479 ']' 00:48:34.301 18:15:34 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:34.301 18:15:34 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:48:34.301 18:15:34 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:34.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:34.301 18:15:34 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:48:34.301 18:15:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:34.301 [2024-11-20 18:15:34.196294] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:48:34.301 [2024-11-20 18:15:34.196350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085479 ] 00:48:34.562 [2024-11-20 18:15:34.273543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:34.562 [2024-11-20 18:15:34.303578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:48:35.132 18:15:34 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:48:35.132 18:15:34 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:48:35.132 18:15:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:48:35.132 18:15:34 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:35.132 18:15:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:35.132 [2024-11-20 18:15:34.997381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:35.132 null0 00:48:35.132 [2024-11-20 18:15:35.029433] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:35.132 [2024-11-20 18:15:35.029781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:35.392 18:15:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:48:35.392 479829797 00:48:35.392 18:15:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:48:35.392 761907818 00:48:35.392 18:15:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3085760 00:48:35.392 18:15:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3085760 /var/tmp/bperf.sock 00:48:35.392 18:15:35 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3085760 ']' 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:35.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:35.392 [2024-11-20 18:15:35.104815] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:48:35.392 [2024-11-20 18:15:35.104862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085760 ] 00:48:35.392 [2024-11-20 18:15:35.178960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:35.392 [2024-11-20 18:15:35.207152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:48:35.392 18:15:35 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:48:35.392 18:15:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:48:35.392 18:15:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:48:35.652 18:15:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:48:35.652 18:15:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:48:35.913 18:15:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:48:35.913 18:15:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:48:35.913 [2024-11-20 18:15:35.776007] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:36.174 nvme0n1 00:48:36.174 18:15:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:48:36.174 18:15:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:48:36.174 18:15:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:36.174 18:15:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:48:36.174 18:15:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:36.174 18:15:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:36.174 18:15:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:48:36.174 18:15:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:36.174 18:15:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:48:36.174 18:15:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:48:36.174 18:15:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:36.174 18:15:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:48:36.174 18:15:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:36.434 18:15:36 keyring_linux -- keyring/linux.sh@25 -- # sn=479829797 00:48:36.434 18:15:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:48:36.434 18:15:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:48:36.434 18:15:36 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 479829797 == \4\7\9\8\2\9\7\9\7 ]] 00:48:36.434 18:15:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 479829797 00:48:36.434 18:15:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:48:36.434 18:15:36 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:36.434 Running I/O for 1 seconds... 00:48:37.818 24375.00 IOPS, 95.21 MiB/s 00:48:37.818 Latency(us) 00:48:37.818 [2024-11-20T17:15:37.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:37.818 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:48:37.818 nvme0n1 : 1.01 24376.39 95.22 0.00 0.00 5236.38 4396.37 9939.63 00:48:37.818 [2024-11-20T17:15:37.734Z] =================================================================================================================== 00:48:37.818 [2024-11-20T17:15:37.734Z] Total : 24376.39 95.22 0.00 0.00 5236.38 4396.37 9939.63 00:48:37.818 { 00:48:37.818 "results": [ 00:48:37.818 { 00:48:37.818 "job": "nvme0n1", 00:48:37.818 "core_mask": "0x2", 00:48:37.818 "workload": "randread", 00:48:37.818 "status": "finished", 00:48:37.818 "queue_depth": 128, 00:48:37.818 "io_size": 4096, 00:48:37.818 "runtime": 1.005194, 00:48:37.818 "iops": 24376.38903535039, 00:48:37.818 "mibps": 95.22026966933745, 00:48:37.818 "io_failed": 0, 00:48:37.818 "io_timeout": 0, 00:48:37.818 "avg_latency_us": 5236.377351344733, 00:48:37.818 "min_latency_us": 4396.373333333333, 00:48:37.818 "max_latency_us": 9939.626666666667 00:48:37.818 } 00:48:37.818 ], 00:48:37.818 "core_count": 1 00:48:37.818 } 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:37.818 18:15:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:37.818 18:15:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:48:37.818 18:15:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:48:37.818 18:15:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@23 -- # return
00:48:37.818 18:15:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:48:37.818 18:15:37 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:48:37.818 18:15:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:48:38.080 [2024-11-20 18:15:37.887253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:48:38.080 [2024-11-20 18:15:37.887526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d64e0 (107): Transport endpoint is not connected
00:48:38.080 [2024-11-20 18:15:37.888523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d64e0 (9): Bad file descriptor
00:48:38.080 [2024-11-20 18:15:37.889524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:48:38.080 [2024-11-20 18:15:37.889531] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:48:38.080 [2024-11-20 18:15:37.889537] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:48:38.080 [2024-11-20 18:15:37.889543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
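This step is a deliberate negative test: :spdk-test:key1 was never loaded into the keyring, so the attach must fail, and the NOT wrapper inverts that failure into a pass. A simplified sketch of the inversion pattern (not the exact helper from autotest_common.sh, which as the trace shows also routes through valid_exec_arg and treats signal deaths separately):

    # Succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate as a real failure
        (( es != 0 ))                    # exit 0 iff the command exited non-zero
    }
    NOT false && echo "ok: failure was expected"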
00:48:38.080 request:
00:48:38.080 {
00:48:38.080   "name": "nvme0",
00:48:38.080   "trtype": "tcp",
00:48:38.080   "traddr": "127.0.0.1",
00:48:38.080   "adrfam": "ipv4",
00:48:38.080   "trsvcid": "4420",
00:48:38.080   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:48:38.080   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:48:38.080   "prchk_reftag": false,
00:48:38.080   "prchk_guard": false,
00:48:38.080   "hdgst": false,
00:48:38.080   "ddgst": false,
00:48:38.080   "psk": ":spdk-test:key1",
00:48:38.080   "allow_unrecognized_csi": false,
00:48:38.080   "method": "bdev_nvme_attach_controller",
00:48:38.080   "req_id": 1
00:48:38.080 }
00:48:38.080 Got JSON-RPC error response
00:48:38.080 response:
00:48:38.080 {
00:48:38.080   "code": -5,
00:48:38.080   "message": "Input/output error"
00:48:38.080 }
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@33 -- # sn=479829797
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 479829797
00:48:38.080 1 links removed
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@33 -- # sn=761907818
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 761907818
00:48:38.080 1 links removed
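Teardown resolves each test key back to its serial number with the same get_keysn helper and unlinks it, so no PSK material survives into later runs. A condensed sketch of that loop, assuming the keys are user-type keys in the session keyring:

    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name" 2>/dev/null) || continue   # skip keys already gone
        keyctl unlink "$sn"    # prints e.g. "1 links removed"
    done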
00:48:38.080 18:15:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3085760
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3085760 ']'
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3085760
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:48:38.080 18:15:37 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3085760
00:48:38.341 18:15:37 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:48:38.341 18:15:37 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:48:38.341 18:15:37 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3085760'
00:48:38.341 killing process with pid 3085760
00:48:38.341 18:15:37 keyring_linux -- common/autotest_common.sh@969 -- # kill 3085760
00:48:38.341 Received shutdown signal, test time was about 1.000000 seconds
00:48:38.341
00:48:38.341 Latency(us)
00:48:38.341 [2024-11-20T17:15:38.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:48:38.341 [2024-11-20T17:15:38.257Z] ===================================================================================================================
00:48:38.341 [2024-11-20T17:15:38.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:48:38.341 18:15:37 keyring_linux -- common/autotest_common.sh@974 -- # wait 3085760
00:48:38.341 18:15:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3085479
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3085479 ']'
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3085479
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3085479
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3085479'
00:48:38.341 killing process with pid 3085479
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@969 -- # kill 3085479
00:48:38.341 18:15:38 keyring_linux -- common/autotest_common.sh@974 -- # wait 3085479
00:48:38.602
00:48:38.602 real 0m4.552s
00:48:38.602 user 0m8.244s
00:48:38.602 sys 0m1.412s
00:48:38.602 18:15:38 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:48:38.602 18:15:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:48:38.602 ************************************
00:48:38.602 END TEST keyring_linux
00:48:38.602 ************************************
00:48:38.602 18:15:38 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:48:38.602 18:15:38 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:48:38.602 18:15:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:48:38.602 18:15:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:48:38.602 18:15:38 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:48:38.602 18:15:38 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:48:38.602 18:15:38 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:48:38.602 18:15:38 -- common/autotest_common.sh@724 -- # xtrace_disable
00:48:38.602 18:15:38 -- common/autotest_common.sh@10 -- # set +x
00:48:38.602 18:15:38 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:48:38.602 18:15:38 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:48:38.602 18:15:38 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:48:38.602 18:15:38 -- common/autotest_common.sh@10 -- # set +x
00:48:46.735 INFO: APP EXITING
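Both daemons are stopped through the same killprocess helper: confirm the pid is alive and names the expected reactor process, signal it, then wait so the exit status is reaped. A condensed sketch of that pattern (the real helper also special-cases processes named sudo, as the '[' reactor_1 = sudo ']' checks above show):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # is it still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap it if it is our child
    }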
00:48:46.735 INFO: killing all VMs
00:48:46.735 INFO: killing vhost app
00:48:46.735 WARN: no vhost pid file found
00:48:46.735 INFO: EXIT DONE
00:48:49.278 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:48:49.278 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:48:49.278 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:48:49.278 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:48:49.278 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:48:49.278 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:65:00.0 (144d a80a): Already using the nvme driver
00:48:49.539 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:48:49.539 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:48:53.743 Cleaning
00:48:53.743 Removing: /var/run/dpdk/spdk0/config
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:48:53.743 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:48:53.743 Removing: /var/run/dpdk/spdk0/hugepage_info
00:48:53.743 Removing: /var/run/dpdk/spdk1/config
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:48:53.743 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:48:53.743 Removing: /var/run/dpdk/spdk1/hugepage_info
00:48:53.743 Removing: /var/run/dpdk/spdk2/config
00:48:53.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:48:53.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:48:53.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:48:53.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:48:53.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:48:53.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:48:53.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:48:53.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:48:53.744 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:48:53.744 Removing: /var/run/dpdk/spdk2/hugepage_info
00:48:53.744 Removing: /var/run/dpdk/spdk3/config
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:48:53.744 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:48:53.744 Removing: /var/run/dpdk/spdk3/hugepage_info
00:48:53.744 Removing: /var/run/dpdk/spdk4/config
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:48:53.744 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:48:53.744 Removing: /var/run/dpdk/spdk4/hugepage_info
00:48:53.744 Removing: /dev/shm/bdev_svc_trace.1
00:48:53.744 Removing: /dev/shm/nvmf_trace.0
00:48:53.744 Removing: /dev/shm/spdk_tgt_trace.pid2418376
00:48:53.744 Removing: /var/run/dpdk/spdk0
00:48:53.744 Removing: /var/run/dpdk/spdk1
00:48:53.744 Removing: /var/run/dpdk/spdk2
00:48:53.744 Removing: /var/run/dpdk/spdk3
00:48:53.744 Removing: /var/run/dpdk/spdk4
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2416870
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2418376
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2419231
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2420269
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2420613
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2421679
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2421949
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2422157
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2423294
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2424075
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2424468
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2424816
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2425177
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2425492
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2425719
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2426073
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2426457
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2427560
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2431120
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2431490
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2431857
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2431930
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2432542
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2432579
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2433283
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2433295
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2433656
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2433921
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2434036
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2434365
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2434813
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2435164
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2435533
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2440081
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2445283
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2457477
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2458361
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2463928
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2464424
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2469643
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2476723
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2479821
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2492682
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2503743
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2505766
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2506781
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2528406
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2533407
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2633621
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2640170
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2647036
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2654441
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2654443
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2655538
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2656763
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2657978
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2658641
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2658647
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2658972
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2658988
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2658997
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2660026
00:48:53.744 Removing: /var/run/dpdk/spdk_pid2661033
00:48:53.745 Removing: /var/run/dpdk/spdk_pid2662107
00:48:53.745 Removing: /var/run/dpdk/spdk_pid2662699
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2662810
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2663043
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2664412
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2665769
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2675436
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2709507
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2714888
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2716747
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2718933
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2719145
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2719282
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2719617
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2720241
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2722305
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2723376
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2723862
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2726436
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2727215
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2728146
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2733011
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2740142
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2740144
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2740145
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2744730
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2749334
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2754923
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2798360
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2803088
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2810225
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2811704
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2813327
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2815011
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2820619
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2825415
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2835038
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2835135
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2840257
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2840355
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2840645
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2841134
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2841238
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2842528
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2844439
00:48:54.004 Removing: /var/run/dpdk/spdk_pid2846290
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2848238
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2850205
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2852174
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2859472
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2860065
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2861174
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2862472
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2868760
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2871741
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2878623
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2885119
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2894945
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2903473
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2903504
00:48:54.005 Removing: /var/run/dpdk/spdk_pid2926007
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2926924
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2927388
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2928043
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2929433
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2929967
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2930473
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2931125
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2936135
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2936467
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2943428
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2943575
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2949901
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2954873
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2966046
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2966744
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2971730
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2972111
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2977170
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2984279
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2987099
00:48:54.265 Removing: /var/run/dpdk/spdk_pid2999023
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3009519
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3011334
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3012422
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3032244
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3036927
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3040064
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3047439
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3047444
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3053326
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3055684
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3057934
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3059244
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3061594
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3062817
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3072628
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3073287
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3073940
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3076804
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3077321
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3077958
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3083183
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3083258
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3085038
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3085479
00:48:54.265 Removing: /var/run/dpdk/spdk_pid3085760
00:48:54.265 Clean
00:48:54.265 18:15:54 -- common/autotest_common.sh@1451 -- # return 0
00:48:54.265 18:15:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:48:54.265 18:15:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:48:54.265 18:15:54 -- common/autotest_common.sh@10 -- # set +x
00:48:54.525 18:15:54 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:48:54.525 18:15:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:48:54.525 18:15:54 -- common/autotest_common.sh@10 -- # set +x
00:48:54.525 18:15:54 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:48:54.525 18:15:54 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:48:54.525 18:15:54 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:48:54.525 18:15:54 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:48:54.525 18:15:54 -- spdk/autotest.sh@394 -- # hostname
00:48:54.525 18:15:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:48:54.785 geninfo: WARNING: invalid characters removed from testname!
00:49:21.345 18:16:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:22.737 18:16:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:24.120 18:16:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:26.028 18:16:25 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:27.411 18:16:27 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:29.321 18:16:28 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:49:30.702 18:16:30 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
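The coverage stage above is a capture, merge, filter pipeline: the post-test capture goes into cov_test.info, gets merged with the pre-test baseline, and then unwanted paths are stripped one pattern at a time. The same flow in condensed form (paths shortened here; the real invocations carry the full --rc option set shown above):

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    $LCOV -c --no-external -d ./spdk -t spdk-cyp-09 -o cov_test.info    # capture this run
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info          # merge with baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        $LCOV -r cov_total.info "$pat" -o cov_total.info               # drop uninteresting paths
    done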
00:49:30.702 18:16:30 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:49:30.702 18:16:30 -- common/autotest_common.sh@1681 -- $ lcov --version
00:49:30.702 18:16:30 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:49:30.964 18:16:30 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:49:30.964 18:16:30 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:49:30.964 18:16:30 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:49:30.964 18:16:30 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:49:30.964 18:16:30 -- scripts/common.sh@336 -- $ IFS=.-:
00:49:30.964 18:16:30 -- scripts/common.sh@336 -- $ read -ra ver1
00:49:30.964 18:16:30 -- scripts/common.sh@337 -- $ IFS=.-:
00:49:30.964 18:16:30 -- scripts/common.sh@337 -- $ read -ra ver2
00:49:30.964 18:16:30 -- scripts/common.sh@338 -- $ local 'op=<'
00:49:30.964 18:16:30 -- scripts/common.sh@340 -- $ ver1_l=2
00:49:30.964 18:16:30 -- scripts/common.sh@341 -- $ ver2_l=1
00:49:30.964 18:16:30 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:49:30.964 18:16:30 -- scripts/common.sh@344 -- $ case "$op" in
00:49:30.964 18:16:30 -- scripts/common.sh@345 -- $ : 1
00:49:30.964 18:16:30 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:49:30.964 18:16:30 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:49:30.964 18:16:30 -- scripts/common.sh@365 -- $ decimal 1
00:49:30.964 18:16:30 -- scripts/common.sh@353 -- $ local d=1
00:49:30.964 18:16:30 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:49:30.964 18:16:30 -- scripts/common.sh@355 -- $ echo 1
00:49:30.964 18:16:30 -- scripts/common.sh@365 -- $ ver1[v]=1
00:49:30.964 18:16:30 -- scripts/common.sh@366 -- $ decimal 2
00:49:30.964 18:16:30 -- scripts/common.sh@353 -- $ local d=2
00:49:30.964 18:16:30 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:49:30.964 18:16:30 -- scripts/common.sh@355 -- $ echo 2
00:49:30.964 18:16:30 -- scripts/common.sh@366 -- $ ver2[v]=2
00:49:30.964 18:16:30 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:49:30.964 18:16:30 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:49:30.964 18:16:30 -- scripts/common.sh@368 -- $ return 0
00:49:30.964 18:16:30 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
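The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x: both version strings are split on the characters . - : and compared numerically field by field, stopping at the first difference, which is why 1.15 sorts before 2. A simplified re-implementation of the idea, assuming purely numeric fields:

    # Return 0 iff version $1 is strictly older than version $2.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"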
00:49:30.964 18:16:30 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:49:30.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:49:30.964 --rc genhtml_branch_coverage=1
00:49:30.964 --rc genhtml_function_coverage=1
00:49:30.964 --rc genhtml_legend=1
00:49:30.964 --rc geninfo_all_blocks=1
00:49:30.964 --rc geninfo_unexecuted_blocks=1
00:49:30.964
00:49:30.964 '
00:49:30.964 18:16:30 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:49:30.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:49:30.964 --rc genhtml_branch_coverage=1
00:49:30.964 --rc genhtml_function_coverage=1
00:49:30.964 --rc genhtml_legend=1
00:49:30.964 --rc geninfo_all_blocks=1
00:49:30.964 --rc geninfo_unexecuted_blocks=1
00:49:30.964
00:49:30.964 '
00:49:30.964 18:16:30 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:49:30.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:49:30.964 --rc genhtml_branch_coverage=1
00:49:30.964 --rc genhtml_function_coverage=1
00:49:30.964 --rc genhtml_legend=1
00:49:30.964 --rc geninfo_all_blocks=1
00:49:30.964 --rc geninfo_unexecuted_blocks=1
00:49:30.964
00:49:30.964 '
00:49:30.964 18:16:30 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:49:30.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:49:30.964 --rc genhtml_branch_coverage=1
00:49:30.964 --rc genhtml_function_coverage=1
00:49:30.964 --rc genhtml_legend=1
00:49:30.964 --rc geninfo_all_blocks=1
00:49:30.964 --rc geninfo_unexecuted_blocks=1
00:49:30.964
00:49:30.964 '
00:49:30.964 18:16:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:49:30.964 18:16:30 -- scripts/common.sh@15 -- $ shopt -s extglob
00:49:30.964 18:16:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:49:30.964 18:16:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:49:30.964 18:16:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:49:30.964 18:16:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:49:30.964 18:16:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:49:30.964 18:16:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:49:30.964 18:16:30 -- paths/export.sh@5 -- $ export PATH
00:49:30.964 18:16:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:49:30.964 18:16:30 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:49:30.964 18:16:30 -- common/autobuild_common.sh@479 -- $ date +%s
00:49:30.964 18:16:30 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732122990.XXXXXX
00:49:30.964 18:16:30 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732122990.3lo4Xd
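The scratch workspace above gets its name from an epoch timestamp plus mktemp's random suffix, which is how /tmp/spdk_1732122990.3lo4Xd ends up both time-sortable and collision-free. The naming step in isolation:

    ts=$(date +%s)                                   # e.g. 1732122990
    workspace=$(mktemp -dt "spdk_${ts}.XXXXXX")      # e.g. /tmp/spdk_1732122990.3lo4Xd
    echo "$workspace"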
00:49:30.964 18:16:30 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:49:30.964 18:16:30 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:49:30.964 18:16:30 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:49:30.964 18:16:30 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:49:30.964 18:16:30 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:49:30.964 18:16:30 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:49:30.964 18:16:30 -- common/autobuild_common.sh@495 -- $ get_config_params
00:49:30.964 18:16:30 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:49:30.964 18:16:30 -- common/autotest_common.sh@10 -- $ set +x
00:49:30.964 18:16:30 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:49:30.964 18:16:30 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:49:30.964 18:16:30 -- pm/common@17 -- $ local monitor
00:49:30.964 18:16:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:30.964 18:16:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:30.964 18:16:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:30.964 18:16:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:30.964 18:16:30 -- pm/common@21 -- $ date +%s
00:49:30.964 18:16:30 -- pm/common@21 -- $ date +%s
00:49:30.964 18:16:30 -- pm/common@25 -- $ sleep 1
00:49:30.964 18:16:30 -- pm/common@21 -- $ date +%s
00:49:30.964 18:16:30 -- pm/common@21 -- $ date +%s
00:49:30.964 18:16:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1732122990
00:49:30.964 18:16:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1732122990
00:49:30.964 18:16:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1732122990
00:49:30.964 18:16:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1732122990
00:49:30.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1732122990_collect-cpu-load.pm.log
00:49:30.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1732122990_collect-vmstat.pm.log
00:49:30.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1732122990_collect-cpu-temp.pm.log
00:49:30.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1732122990_collect-bmc-pm.bmc.pm.log
00:49:31.906 18:16:31 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:49:31.906 18:16:31 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:49:31.906 18:16:31 -- spdk/autopackage.sh@14 -- $ timing_finish
00:49:31.906 18:16:31 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:49:31.906 18:16:31 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:49:31.906 18:16:31 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:49:31.906 18:16:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:49:31.906 18:16:31 -- pm/common@29 -- $ signal_monitor_resources TERM
00:49:31.906 18:16:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:49:31.906 18:16:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:31.906 18:16:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:49:31.906 18:16:31 -- pm/common@44 -- $ pid=3094506
00:49:31.906 18:16:31 -- pm/common@50 -- $ kill -TERM 3094506
00:49:31.906 18:16:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:31.906 18:16:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:49:31.906 18:16:31 -- pm/common@44 -- $ pid=3094507
00:49:31.906 18:16:31 -- pm/common@50 -- $ kill -TERM 3094507
00:49:31.906 18:16:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:31.906 18:16:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:49:31.906 18:16:31 -- pm/common@44 -- $ pid=3094509
00:49:31.906 18:16:31 -- pm/common@50 -- $ kill -TERM 3094509
00:49:31.906 18:16:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:49:31.906 18:16:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:49:32.167 18:16:31 -- pm/common@44 -- $ pid=3094533
00:49:32.167 18:16:31 -- pm/common@50 -- $ sudo -E kill -TERM 3094533
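Each collector launched above writes a pid file under the power/ output directory, and that file is the only state the EXIT trap needs: stopping is just "for every pid file present, TERM whatever it names", with sudo for the one collector that was itself started under sudo. A condensed sketch of that stop path:

    power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    for pidfile in "$power_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(< "$pidfile")" 2>/dev/null || true   # collect-bmc-pm needs sudo -E kill
    done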
00:49:32.178 + [[ -n 2315744 ]]
00:49:32.178 + sudo kill 2315744
00:49:32.188 [Pipeline] }
00:49:32.194 [Pipeline] // stage
00:49:32.201 [Pipeline] }
00:49:32.217 [Pipeline] // timeout
00:49:32.222 [Pipeline] }
00:49:32.235 [Pipeline] // catchError
00:49:32.240 [Pipeline] }
00:49:32.254 [Pipeline] // wrap
00:49:32.260 [Pipeline] }
00:49:32.275 [Pipeline] // catchError
00:49:32.284 [Pipeline] stage
00:49:32.287 [Pipeline] { (Epilogue)
00:49:32.299 [Pipeline] catchError
00:49:32.301 [Pipeline] {
00:49:32.314 [Pipeline] echo
00:49:32.316 Cleanup processes
00:49:32.323 [Pipeline] sh
00:49:32.614 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:32.614 3094654 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:49:32.614 3095203 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:32.629 [Pipeline] sh
00:49:32.916 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:49:32.917 ++ grep -v 'sudo pgrep'
00:49:32.917 ++ awk '{print $1}'
00:49:32.917 + sudo kill -9 3094654
00:49:32.928 [Pipeline] sh
00:49:33.214 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:49:45.447 [Pipeline] sh
00:49:45.734 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:49:45.734 Artifacts sizes are good
00:49:45.749 [Pipeline] archiveArtifacts
00:49:45.757 Archiving artifacts
00:49:45.982 [Pipeline] sh
00:49:46.360 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:49:46.375 [Pipeline] cleanWs
00:49:46.385 [WS-CLEANUP] Deleting project workspace...
00:49:46.385 [WS-CLEANUP] Deferred wipeout is used...
00:49:46.392 [WS-CLEANUP] done
00:49:46.394 [Pipeline] }
00:49:46.409 [Pipeline] // catchError
00:49:46.420 [Pipeline] sh
00:49:46.708 + logger -p user.info -t JENKINS-CI
00:49:46.718 [Pipeline] }
00:49:46.731 [Pipeline] // stage
00:49:46.735 [Pipeline] }
00:49:46.748 [Pipeline] // node
00:49:46.753 [Pipeline] End of Pipeline
00:49:46.787 Finished: SUCCESS